
Data Engineer – Programming, Data Pipelines
Haeger Consulting
Employment Type: Full-time
Location Type: Hybrid
Location: Bonn, Germany
About the role
- Develop scalable ETL/ELT data pipelines for batch and streaming workloads
- Work with Spark/PySpark, Python, and SQL
- Implement data models based on Kimball, Data Vault, or Lakehouse principles
- Transform, normalize, and denormalize data from various source systems
- Orchestrate data pipelines using Airflow or comparable tools
- Work with Databricks, Snowflake, or similar platforms
- Support data analytics and machine learning pipelines
- Ensure quality through testing, documentation, and CI/CD
Requirements
- Several years of professional experience in the areas listed above
- Strong SQL skills
- Strong knowledge of Spark/PySpark
- Proficient in Python
- Experience with dbt and Airflow or comparable orchestration tools
- Comfortable working with Snowflake and/or Databricks
- Experienced in data modeling (dimensional modeling, Data Vault, normalization and denormalization)
Benefits
- Innovative projects and a creative environment
- Agile working environment
- Flexibility
- Freedom to shape your role
- Knowledge sharing and professional growth
- Team spirit
- Attractive benefits
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to improve ATS matching.
Hard Skills & Tools
ETL, ELT, Spark, PySpark, Python, SQL, data modeling, dimensional modeling, normalization, denormalization