
About the role
- Design, develop, and maintain robust, scalable data pipelines using Python and Airflow.
- Collaborate with Analytics Engineers and other team members to deliver clean, well-modeled data to downstream users.
- Implement monitoring, logging, and alerting to ensure data pipeline reliability and performance.
- Ensure data quality and integrity through validation frameworks and testing.
- Optimize performance and cost-efficiency of cloud-based data systems.
- Contribute to architectural decisions around data storage, workflow orchestration, and infrastructure as code.
- Document data pipelines and processes.
Requirements
- Bachelor’s degree in computer science or related field
- Proven experience in a Data Engineering or Software Engineering role
- Strong knowledge of Python and hands-on experience with SQL
- Experience with Kubernetes and containerization
- Experience with Airflow or a similar workflow orchestration tool
- Familiarity with modern data file formats (Parquet, ORC, JSON)
- Experience with data architecture, modeling, and pipelines in an enterprise data warehouse (Snowflake, Redshift, BigQuery)
- Experience with AWS products (S3, Lambda, Athena)
- Ability to work both independently and in a team-oriented environment
- Strong analytical and critical-thinking skills
Benefits
- Health insurance
- Retirement plans
- Paid time off
- Flexible work arrangements
- Professional development
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard Skills & Tools
Python, SQL, Kubernetes, Airflow, data architecture, data modeling, data pipelines, AWS, Parquet, ORC
Soft Skills
analytical skills, critical thinking, team-oriented, independent work
Education
Bachelor’s degree in computer science