Tech Stack
Airflow, Amazon Redshift, BigQuery, Cloud, Cyber Security, Kafka, Python, Scala, Spark, SQL
About the role
- Build and maintain scalable, secure, and reliable data pipelines to support analytics, product insights, and machine learning initiatives.
- Contribute to data architecture decisions, balancing performance, reliability, and maintainability.
- Collaborate with product, engineering, and business stakeholders to deliver data solutions that drive strategic and operational impact.
- Promote data quality and governance by implementing best practices for validation, lineage tracking, and observability.
- Share knowledge and support the growth of peers through code reviews, documentation, and informal mentorship.
Requirements
- 5+ years of experience in data engineering, ideally in a fast-paced, product-focused environment.
- Solid understanding of cloud data platforms (e.g., Snowflake, BigQuery, Redshift) and orchestration tools (e.g., Airflow, dbt).
- Strong programming skills in Python or Scala, plus proficiency in writing efficient SQL.
- Familiarity with real-time data processing frameworks (e.g., Kafka, Spark Streaming).
- Experience designing data models for analytics and machine learning use cases.
- A proactive mindset, a strong sense of ownership, and a passion for solving meaningful problems.