About the role
- Build and maintain robust, scalable data pipelines using Airflow, Python, and SQL.
- Design, build, and optimize data pipelines using dbt or sqlmesh (a minimal sketch of such a pipeline follows this list).
- Develop and manage the Operational Data Store (ODS) and support the eventual rollout of a modern Data Warehouse.
- Integrate data from internal systems and external APIs to create clean, reliable datasets.
- Work closely with engineers to operationalize machine learning workflows.
- Ensure high data quality through monitoring, validation, and error handling.
- Provide guidance to less experienced team members and champion data engineering best practices.
- Deploy and manage infrastructure in the cloud (AWS, GCP, or Azure) using modern DevOps tooling.
- Implement monitoring and alerting to ensure data pipelines are reliable and maintainable.
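For context on the stack these responsibilities describe, the sketch below shows the kind of pipeline involved: an Airflow DAG that stages source data, runs dbt models, and gates on a basic data-quality check. It assumes Airflow 2.x; the DAG name, script paths, and the empty validation stub are illustrative placeholders, not details from this posting.

```python
# Minimal sketch, assuming Airflow 2.x and a dbt project at /opt/dbt.
# All names and paths are illustrative.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator


def check_row_counts():
    # Placeholder data-quality gate. A real pipeline would query the warehouse
    # (Snowflake, BigQuery, Redshift, ...) via the relevant Airflow provider
    # hook and raise if row counts are zero or outside expected bounds.
    pass


with DAG(
    dag_id="daily_ods_refresh",          # illustrative DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Ingest from internal systems and external APIs (stubbed as a shell step).
    extract = BashOperator(
        task_id="extract_sources",
        bash_command="python /opt/pipelines/extract.py",  # hypothetical script
    )

    # Transform with dbt (sqlmesh would slot in similarly).
    transform = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt --profiles-dir /opt/dbt",
    )

    # Basic quality gate before downstream consumers see the data.
    validate = PythonOperator(
        task_id="validate_row_counts",
        python_callable=check_row_counts,
    )

    extract >> transform >> validate
```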
Requirements
- At least 5 years of experience in data engineering or related roles.
- Strong skills in Python, SQL, and Airflow or similar orchestration tools.
- Experience working with cloud infrastructure and data warehousing tools (e.g., Snowflake, BigQuery, Redshift).
- Exposure to ML pipelines or collaboration with ML/AI teams.
- Ability to work independently while supporting a less-experienced team.
- Strong communication skills and an eagerness to mentor and share knowledge.
- Experience building an ODS or Data Warehouse from scratch.
- Familiarity with event-driven systems or streaming tools (e.g., Kafka, Pub/Sub).
- DevOps experience or infrastructure-as-code (e.g., Terraform, CloudFormation).
Benefits
- Competitive compensation aligned with relevant experience.
- Remote-friendly, flexible work environment.
- Budget for learning, courses, and conferences.
- A supportive, mission-driven team eager to grow and learn together.
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard Skills & Tools
Python, SQL, Airflow, dbt, sqlmesh, data warehousing, ODS, machine learning workflows, monitoring, error handling
Soft Skills
communication, mentoring, team support, independence, guidance