
Data Engineer, Google Cloud
NMI
Full-time
Location Type: Remote
Location: United States
Salary
💰 $90,000–$120,000 per year
About the role
- Build and maintain production-grade ELT pipelines that ingest data from internal applications, third-party SaaS tools, and event streams into our BigQuery data warehouse.
- Own specific data domains end-to-end — from raw ingestion through to marts — ensuring your areas of the warehouse are accurate, tested, and well-documented.
- Write and maintain dbt models, tests, macros, and documentation within our established dbt project conventions and code review process.
- Develop and manage Airflow DAGs on Cloud Composer (or a comparable orchestration tool) to orchestrate data workflows, following patterns and standards set by the team (see the DAG sketch after this list).
- Implement data quality checks and monitoring to catch anomalies before they reach downstream consumers.
- Optimize BigQuery queries and models for cost and performance within your domain, escalating architectural tradeoffs to senior engineers when appropriate.
- Collaborate with analysts and stakeholders to translate business data needs into well-scoped pipeline and modeling tasks.
- Participate in on-call rotations, respond to pipeline incidents, and write clear postmortems.
- Contribute to team documentation and runbooks so that your work is maintainable by others.
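
To make the orchestration bullets concrete, here is a minimal Airflow 2.x-style sketch of the shape of DAG this role describes: ingest, a data-quality gate, then a scoped dbt build. Every project, dataset, table, and script name below is an illustrative placeholder, not NMI's actual pipeline.

```python
import pendulum
from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.providers.google.cloud.operators.bigquery import BigQueryCheckOperator

with DAG(
    dag_id="orders_elt",  # hypothetical data domain
    schedule="0 6 * * *",  # daily at 06:00 UTC
    start_date=pendulum.datetime(2024, 1, 1, tz="UTC"),
    catchup=False,
    default_args={"retries": 2},
) as dag:
    # Ingest raw data (e.g., a managed sync or a custom extract script).
    ingest = BashOperator(
        task_id="ingest_raw_orders",
        bash_command="python /opt/pipelines/extract_orders.py",  # placeholder script
    )

    # Quality gate: fail fast if yesterday's load arrived empty,
    # before dbt builds anything downstream of it.
    quality_check = BigQueryCheckOperator(
        task_id="check_raw_orders_not_empty",
        sql="""
            SELECT COUNT(*) > 0
            FROM `my-project.raw.orders`  -- placeholder table
            WHERE DATE(loaded_at) = CURRENT_DATE() - 1
        """,
        use_legacy_sql=False,
    )

    # Build only this domain's dbt models (and run their tests).
    dbt_run = BashOperator(
        task_id="dbt_build_orders",
        bash_command="dbt build --select orders",
    )

    ingest >> quality_check >> dbt_run
```

Putting the quality gate before the dbt build means an empty or late raw load fails the DAG early instead of propagating bad data into marts, which is exactly the "catch anomalies before they reach downstream consumers" responsibility above.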
Requirements
- 3–5 years of experience in data engineering or a closely related data infrastructure role.
- Proven experience designing and implementing scalable data pipelines and warehouse architectures.
- Strong expertise in Google Cloud Platform (BigQuery, Cloud Storage, Cloud Composer, Pub/Sub, Dataflow).
- Hands-on experience with dbt (data build tool) — models, tests, macros, sources, and documentation — at production scale.
- Experience building and maintaining data pipelines with Apache Airflow or a comparable workflow orchestration tool.
- Strong proficiency in SQL, including advanced BigQuery SQL (window functions, partitioning, clustering, query optimization); a short query sketch follows this list.
- Proficiency in Python for data engineering tasks, including API integrations, data processing scripts, and custom operators.
- Familiarity with data modeling concepts: star schema, dimensional modeling, slowly changing dimensions (SCD).
- Experience with version control (Git) and collaborative development workflows (pull requests, code review).
- Understanding of data quality, lineage, and observability best practices.
- Startup or growth-stage mindset — comfortable with ambiguity, rapid iteration, and evolving priorities.
- Excellent communication skills, with the ability to collaborate effectively across technical and non-technical teams.
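
As a concrete example of the BigQuery SQL and Python requirements together, here is a hedged sketch of a window-function query run from Python with the official google-cloud-bigquery client. It assumes a table date-partitioned on `updated_at`; the project, dataset, table, and column names are placeholders.

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # placeholder project id

# "Latest row per customer" via ROW_NUMBER(); assuming the table is
# date-partitioned on updated_at, the WHERE clause prunes partitions
# so the scan (and therefore the cost) stays bounded.
sql = """
SELECT * EXCEPT (rn)
FROM (
  SELECT
    *,
    ROW_NUMBER() OVER (
      PARTITION BY customer_id
      ORDER BY updated_at DESC
    ) AS rn
  FROM `my-project.marts.dim_customers`  -- placeholder table
  WHERE DATE(updated_at) >= DATE_SUB(CURRENT_DATE(), INTERVAL 90 DAY)
)
WHERE rn = 1
"""

for row in client.query(sql).result():
    print(dict(row))
```

The `ROW_NUMBER()` dedup shown here is the standard "current row per key" pattern behind SCD-style dimension models, and the partition filter is the kind of cost-aware optimization the role calls out.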
Benefits
- Annual salary + bonus
- A remote-first culture!
- Flex PTO
- Health, Dental and Vision Insurance
- 13 Paid Holidays
- Company volunteer days
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard Skills & Tools
data engineering, data pipelines, data warehouse architecture, dbt, Apache Airflow, SQL, BigQuery SQL, Python, data modeling, data quality
Soft Skills
communication, collaboration, problem-solving, adaptability, documentation