
About the role
- Design, build, and maintain scalable, fault-tolerant data pipelines (batch and/or streaming) for core business and product data.
- Ingest data from diverse sources including APIs, databases, event streams, and third-party services, ensuring high data quality and reliability.
- Design and manage data models and storage layers (data warehouses, data lakes) that support analytics and downstream use cases.
- Partner with analytics, product, and engineering teams to deliver clean, well-documented datasets that enable self-service analytics and experimentation.
- Implement monitoring, logging, and alerting to ensure pipeline reliability, performance, and cost efficiency.
- Enforce data governance best practices, including access control, privacy, documentation, and data lineage.
Requirements
- 3+ years of experience as a Data Engineer, data-heavy Backend Engineer, or in a similar role.
- Strong programming skills in Python and/or TypeScript, with solid SQL proficiency.
- Good understanding of data modeling, ETL/ELT concepts, and analytics workflows.
- Hands-on experience with data warehouses (e.g., BigQuery, Snowflake).
- Experience building and operating production data pipelines.
Benefits
- Build and own data systems that operate at real scale
- Work on high-impact, business-critical data use cases
- High-ownership role with autonomy and trust
- Flat structure: execution and results matter more than titles
- Competitive compensation and strong growth opportunities
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard Skills & Tools
Python, TypeScript, SQL, data modeling, ETL, ELT, data pipelines, data warehouses, data lakes, analytics workflows