Salary
💰 $145,000 - $210,000 per year
Tech Stack
Airflow, Apache, BigQuery, Cloud, Distributed Systems, ETL, Google Cloud Platform, JavaScript, Kafka, NoSQL, Postgres, SQL
About the role
- Design and implement scalable data pipelines for ingesting, processing, and transforming large volumes of universal pixel and event data
- Build and maintain real-time and batch workflows using tools like Kafka, Airflow, and BigQuery (a minimal example follows this list)
- Collaborate with engineers and product teams to ensure event data is captured accurately through our JavaScript-based universal pixel
- Own and optimize ELT processes to support reporting, analytics, and machine learning use cases
- Develop and maintain data models to support internal stakeholders and platform features
- Monitor pipeline health, implement anomaly detection, and maintain high data quality standards
- Contribute to the evolution of our cloud data infrastructure (built on GCP)
- Document data pipelines, models, and operational workflows for transparency and team knowledge sharing
- Promote and enforce best practices in data engineering, observability, and data governance
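
For illustration only, here is a minimal Airflow 2.x DAG of the kind of batch workflow described above: it loads newline-delimited JSON pixel events from a GCS bucket into BigQuery. The DAG id, bucket, and dataset/table names are hypothetical placeholders, not details from this posting.

```python
# Illustrative sketch only; all names below are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import (
    GCSToBigQueryOperator,
)

with DAG(
    dag_id="pixel_events_daily_load",   # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                  # daily batch load
    catchup=False,
) as dag:
    load_events = GCSToBigQueryOperator(
        task_id="load_events_to_bigquery",
        bucket="example-pixel-events",                               # hypothetical GCS bucket
        source_objects=["events/{{ ds }}/*.json"],                   # one partition per day
        destination_project_dataset_table="analytics.pixel_events",  # hypothetical table
        source_format="NEWLINE_DELIMITED_JSON",
        write_disposition="WRITE_APPEND",
    )
```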
Requirements
- 5+ years of experience in software or data engineering, with a focus on building scalable data infrastructure
- Strong experience with data pipelining and modeling, including tools like Apache Airflow, Databricks, Snowflake, and dbt
- In-depth knowledge of streaming technologies such as Apache Kafka (a minimal consumer sketch follows this list)
- Skilled in designing and maintaining ELT/ETL workflows using modern tooling
- Proficient in SQL and comfortable working with both relational and NoSQL databases (e.g., Postgres, Bigtable, Spanner)
- Experience working with cloud platforms, ideally GCP
- Familiarity with JavaScript and front-end tracking concepts, including non-browser environments such as CTV (Connected TV)
- Strong problem-solving and debugging skills, especially with distributed systems and large-scale event data
- Excellent collaboration and communication skills
- Bonus: experience in adtech, martech, or CTV attribution
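
Likewise for illustration only, a minimal Kafka consumer sketch (using the kafka-python client) of the kind of event-stream consumption this role involves. The topic, broker address, and consumer group are hypothetical placeholders.

```python
# Illustrative sketch only; topic, broker, and group names are hypothetical.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "pixel-events",                        # hypothetical topic
    bootstrap_servers=["localhost:9092"],  # hypothetical broker
    group_id="event-pipeline",             # hypothetical consumer group
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    event = message.value
    # Downstream, events like this would be validated and loaded into BigQuery.
    print(event.get("event_type"), event.get("timestamp"))
```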