Salary
💰 $138,900 - $186,200 per year
Tech Stack
Airflow, Amazon Redshift, BigQuery, Cloud, Distributed Systems, ETL, Hadoop, HDFS, PySpark, Python, Spark, SQL
About the role
- Build and maintain audience platform features that improve and expand audience targeting capabilities.
- Partner with technical and non-technical colleagues to understand audience data and targeting requirements.
- Work with engineering teams to collect required data from internal and external systems.
- Support Agile methodologies such as Scrum by actively participating in regular ceremonies such as stand-ups, retrospectives, and sprint planning.
- Design table structures and define ETL pipelines to build performant and scalable solutions.
- Develop data quality checks.
- Participate in on-call support.
- Develop and maintain ETL routines using orchestration tools such as Airflow (see the sketch after this list).
- Perform ad hoc analysis and tune SQL and ETL jobs as necessary.
- Participate in code reviews and problem-solving.
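For illustration only, below is a minimal sketch of the kind of Airflow-orchestrated ETL routine with a data-quality gate described above. The DAG id, task names, and callables (`audience_targeting_etl`, `extract_audience_events`, etc.) are hypothetical examples, not part of the role description.

```python
# Minimal Airflow 2.x sketch: extract -> transform -> quality checks -> load.
# All business logic below is a placeholder; only the DAG wiring is illustrated.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_audience_events(**context):
    # Hypothetical: pull raw audience events from internal/external systems.
    ...


def transform_to_segments(**context):
    # Hypothetical: model raw events into audience-segment tables.
    ...


def run_quality_checks(**context):
    # Hypothetical: row counts, null rates, referential checks;
    # raise an exception to fail the run and block the load.
    ...


def load_to_warehouse(**context):
    # Hypothetical: load curated tables into the MPP/cloud warehouse.
    ...


with DAG(
    dag_id="audience_targeting_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_audience_events)
    transform = PythonOperator(task_id="transform", python_callable=transform_to_segments)
    quality = PythonOperator(task_id="quality_checks", python_callable=run_quality_checks)
    load = PythonOperator(task_id="load", python_callable=load_to_warehouse)

    # Quality checks gate the warehouse load.
    extract >> transform >> quality >> load
```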
Requirements
- 5+ years of big data engineering experience modeling and developing large data pipelines.
- Hands-on experience with distributed systems such as Spark and Hadoop (HDFS, Hive, Presto, PySpark) for querying and processing data.
- Strong Python and SQL skills for processing large datasets.
- Experience with at least one major MPP or cloud database technology (Snowflake, Redshift, BigQuery, Databricks).
- Familiarity with data modeling techniques and data warehousing standard methodologies and practices.
- API and backend development experience.
- Bachelor's degree in Computer Science, Information Systems, Software, Electrical or Electronics Engineering, or comparable field of study, and/or equivalent work experience.