
Senior Data Engineer
RoomPriceGenie
Full-time
Location Type: Remote
Location: Germany
About the role
- Design, build, and maintain scalable and reliable data pipelines using our modern data stack (Snowflake, Dagster, and dbt).
- Own end-to-end data flows, from ingestion services (Django & Celery) to analytics-ready models in the data warehouse.
- Contribute to the migration of legacy Django/Celery-based pipelines toward our modern data platform architecture.
- Collaborate closely with a Product Manager, Data Engineers, and Backend Engineers to prioritize integrations and deliver high-impact data capabilities.
- Ensure data quality, reliability, and observability through testing, monitoring, and clear documentation.
- Support multiple internal teams by providing accurate, timely, and well-documented reservation data they can trust.
- Continuously improve scalability, automation, and operational efficiency as data volume and integrations grow.
- Take ownership of features and improvements from design to production, including post-deployment monitoring and iteration.
Requirements
- 4+ years of professional Python experience, ideally in data engineering and/or backend systems.
- Strong experience building and maintaining ETL/ELT pipelines in production environments on modern cloud data warehouses (e.g., Snowflake, Databricks, BigQuery, Redshift).
- Strong experience in data modeling, including analytics-ready schema design, fact/dimension modeling, and performance optimization.
- Experience working with orchestrated data pipelines (e.g., Dagster, Airflow, or similar tools).
- Experience building backend or ingestion services using Python web frameworks such as Django, FastAPI, or Flask.
- Familiarity with background task processing and asynchronous workflows (e.g., Celery or similar systems).
- Experience working with cloud infrastructure, preferably AWS (e.g., S3, RDS/Aurora).
- Strong understanding of software design principles and data pipeline architecture.
- Experience working with large datasets using tools such as pandas, Polars, or NumPy.
- Excellent communication skills: you can explain complex technical topics clearly to both technical and non-technical stakeholders.
- High ownership mentality: you take responsibility for reliability, quality, and long-term maintainability.
- A collaborative, egoless team player who thrives in cross-functional environments.
- Fluent in English and comfortable participating in technical discussions.
- Based in or able to work within European time zones (UTC+0 to UTC+2).
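To give a concrete sense of the fact/dimension modeling mentioned in the requirements, here is a minimal, hypothetical sketch in plain Python. All table and field names (hotels, guests, amounts) are invented for illustration; in practice this shaping would happen in dbt models on the warehouse rather than in application code.

```python
# Hypothetical sketch of fact/dimension modeling: raw reservation
# records are split into a hotel dimension and a reservation fact
# table linked by surrogate keys. All names are illustrative.

raw_reservations = [
    {"hotel": "Alpenblick", "city": "Zurich", "guest": "A", "amount": 210.0},
    {"hotel": "Seehof", "city": "Lucerne", "guest": "B", "amount": 180.0},
    {"hotel": "Alpenblick", "city": "Zurich", "guest": "C", "amount": 250.0},
]

def build_star_schema(records):
    """Return (dim_hotel, fact_reservation) tables as lists of dicts."""
    surrogate_keys = {}  # natural key (name, city) -> surrogate hotel_id
    dim_rows, fact_rows = [], []
    for rec in records:
        key = (rec["hotel"], rec["city"])
        if key not in surrogate_keys:
            surrogate_keys[key] = len(surrogate_keys) + 1
            dim_rows.append({"hotel_id": surrogate_keys[key],
                             "name": rec["hotel"], "city": rec["city"]})
        fact_rows.append({"hotel_id": surrogate_keys[key],
                          "guest": rec["guest"], "amount": rec["amount"]})
    return dim_rows, fact_rows

dim_hotel, fact_reservation = build_star_schema(raw_reservations)

# Analytics-ready: revenue per hotel via a join on hotel_id
revenue = {}
for fact in fact_reservation:
    revenue[fact["hotel_id"]] = revenue.get(fact["hotel_id"], 0.0) + fact["amount"]
```

The dimension table deduplicates hotel attributes, while the fact table stays narrow and keyed, which is the same shape an analytics-ready warehouse model would expose to BI tools.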
Benefits
- Remote-First Model: You can work flexibly from anywhere.
- One Team, One Vision, One Goal: We’re in this together!
- Epic Team Gatherings: Every year, we bring our global crew together.
- Growth and Development: We’re all about lifelong learning!
- 5 Years? 5 Weeks: After five years, you’ll earn incredible bonus vacation time.
- Birthday Celebrations: It’s your day, so take it off!
- Flexible Hours: We offer flexible working hours.
- Wellbeing Matters: Access to Headspace for mental health support.
- BetterHelp Support: Offers professional online therapy and counseling.
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard skills
Python, ETL, ELT, data modeling, analytics-ready schema design, fact/dimension modeling, performance optimization, data pipeline architecture, large datasets, asynchronous workflows
Soft skills
communication skills, ownership mentality, collaborative, team player