
About the role
- Maintain and manage website scraping configurations using Python.
- Monitor scraping configurations for errors and potential crashes.
- Review retrieved data to detect quality issues and signs of blocking by target sites.
- Coordinate with stakeholders to understand scraping task requirements and report issues.
- Prepare and share periodic reports on scraping activities with stakeholders.
- Develop necessary pipelines to ingest data into the Datalake and perform required transformations.
Requirements
- Minimum 2 years of experience in a similar role.
- Proven experience in data engineering with expertise in designing and implementing scalable data architectures.
- Strong experience with ETL processes, data modeling, and data warehousing (Airflow & DBT preferred).
- Expertise in database technologies, both relational (SQL) and NoSQL.
- Knowledge of cloud platforms, particularly Azure.
- Solid understanding of data security measures and compliance standards.
- Strong Python skills for data engineering and automation.
- Strong collaboration skills to work closely with data scientists and analysts.
- Ability to optimize data pipelines for performance and efficiency.
- Ability to build, test, and maintain tasks and projects.
- Experience with version control systems, such as Git.
- Hands-on experience with Airflow and/or DBT.
- Experience with Terraform for infrastructure management.
- Strong academic background in a relevant field.
- Fluent in English (French is a plus).
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard Skills & Tools
Python, ETL processes, data modeling, data warehousing, SQL, NoSQL, data engineering, data pipelines, Airflow, DBT
Soft Skills
collaboration, communication, problem-solving, reporting, stakeholder management, optimization, project management, attention to detail, adaptability, teamwork