Building ETL/ELT pipelines in Python and SQL to process, among other things, hundreds of millions of tracking events daily for our analytics tools
Designing and implementing a modern data stack, continuously reevaluating the efficiency, performance, and usability of the solutions in use
Designing and optimizing data modeling and data governance to ensure a clear and consistent data, analytics, and reporting environment
Monitoring and administering the tools and databases in the existing data stack (including DWS, a Postgres-compatible data warehouse, and dbt)
Requirements
You are passionate about large-scale data, high-performance processing, and technical innovation
You have solid professional experience in data engineering as well as in data analytics; experience with DWS / Redshift is a plus
Excellent knowledge of ETL/ELT, SQL, and NoSQL, and hands-on experience implementing ETL jobs
Experience developing scalable cloud solutions (e.g., AWS or HWC)
Experience administering data visualization tools such as Tableau is an advantage
Strong analytical mindset, team-oriented work style, and strong communication skills
Benefits
Professional colleagues with several years of industry experience
Dynamic environment with flat hierarchies and fast decision-making, offering plenty of room for your own ideas
Flexible working hours and the possibility to work remotely (office presence expected 2 days per week)
Regular company events
Fruit, beverages, and meal allowances
Subsidy for the Deutschland-Ticket
Possible membership with Wellpass