Tech Stack
Airflow, Amazon Redshift, AWS, Azure, BigQuery, Cloud, ETL, Google Cloud Platform, Python, SQL
About the role
- Build and run the data platform powering product features and company insights
- Process billions of events monthly and manage 30TB+ data and 200+ data pipelines
- Work side by side with a Senior Data Engineer and a Senior Data Analyst to scale the platform and deliver accurate, reliable, and well-structured data
- Contribute to embedding AI into teams and products
- Own and optimize ETL pipelines end-to-end (design, build, monitor)
- Optimize SQL/dbt models and ETL pipelines for performance and cost
- Collaborate with Analysts and Product squads to deliver datasets and reliable pipelines
- Improve observability (logging, alerting, monitoring) in the data stack
- Design and implement a real-time pipeline to serve product features
- Drive initiatives around data reliability, scalability, and cost optimization
- Help define the Data Engineering roadmap and lead design of new components
Requirements
- 2+ years of experience in a Data Engineering or similar role
- Experience with cloud providers (AWS, Azure, GCP)
- Proficient in SQL/Python, experience with an orchestration tool (e.g. Airflow) and a data warehouse (Snowflake, BigQuery, Redshift)
- Strong understanding of data modelling, warehousing principles, and performance optimization techniques
- Ability to break down ambiguous problems into concrete, manageable components
- Experience communicating with a range of stakeholders and presenting both technical and business problems clearly
- Listening skills: open to input from other team members and departments
- Fluent English (US/UK); French at B2 level or equivalent
- Enthusiasm for our working environment (link)