
Data Platform Engineer
Lola Blankets
Full-time
Location Type: Remote
Location: United States
About the role
- Own our data ingestion layer end-to-end, including completing our migration to open-source ingestion tooling (dlt) and maintaining reliability as the stack evolves
- Manage dbt models, tests, documentation, and the semantic layer - the definitions that determine what every metric means across the business
- Own Dagster orchestration: scheduling, retries, alerting, and failure handling across all pipeline runs
- Keep Lightdash metadata, dimension/measure definitions, and access controls accurate and current
- Accelerate data refresh cycles to support near-real-time operational use across the business
- Build monitoring, failure alerting, and anomaly detection into the stack so issues surface proactively
- Chase data through systems when things go wrong: trace why records drop or transform unexpectedly between source and dashboard, and resolve the root cause rather than the symptom
- Establish and document data quality standards and lineage practices across the warehouse
- Partner with our Technology and Engineering Lead on platform infrastructure, system integrations, and technical initiatives where data is a core component
- Build and maintain reverse ETL pipelines to push warehouse data back into operational tools
- Support real-time event pipeline development as new data sources and product surfaces come online
- Contribute to A/B testing infrastructure and the systems that support consistent metric definitions across the org
- Own separation of dev and production environments: deployment pipelines, change management, access controls, and release practices
- Run a PII audit across the stack and implement data warehouse governance standards
- Maintain infrastructure documentation and ensure the platform is operable beyond any single person
- Continuously evaluate our platform stack to ensure we're using the right tools - favoring open-source, cost-effective, and maintainable solutions
Requirements
- 3+ years of data engineering or data platform experience - you've owned production pipelines, not just built them in a sandbox
- Strong dbt skills: models, tests, sources, exposures, and the semantic layer
- Solid experience with Snowflake or an equivalent cloud warehouse (we expect to land on MotherDuck shortly)
- Hands-on with a modern orchestration tool (Dagster, Airflow, Prefect, or similar)
- Strong Python or TypeScript plus SQL - enough to read, debug, and write anything in the stack
- DevOps experience: you think in terms of environments, deployments, change control, and what happens when things break in production
- Open-source bias - you'd rather build and own something than pay for a managed tool that abstracts away control
- Comfortable with GenAI-assisted development: using LLMs as part of your development workflow to move faster and write better code
- Comfortable debugging data end-to-end - you can trace a wrong number back through the semantic layer, dbt models, and ingestion pipeline to the source
- Comfortable working across team boundaries; this role sits between data and engineering and requires interfacing with leaders on both teams
- Comfortable working independently on a lean team with minimal process overhead
- Experience in DTC, ecommerce, or a fast-moving consumer business a plus
Benefits
- 21 days paid vacation + all federal holidays
- Full health benefits
- 16 weeks of paid birth-parent leave; 8 weeks of paid non-birth-parent leave
- 55% off Lola Blankets for friends and family
- Opportunities for career growth and leadership roles within Lola Blankets
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard Skills & Tools
data engineering, dbt, Snowflake, Python, TypeScript, SQL, DevOps, data quality standards, reverse ETL, A/B testing
Soft Skills
cross-team collaboration, independent work, problem-solving, communication, change management, debugging, adaptability, ownership, technical initiative, documentation