Own and maintain our data pipeline architectures (e.g., critical data ingestion services, ETL pipelines, database mirroring and warehousing), ensuring they are reliable, monitored, and meet SLAs.
Manage and evolve our data modeling environments and provide a smooth, well-documented workflow for analysts and engineers.
Operate and improve our orchestration systems (Dagster), ensuring jobs run reliably and are observable.
Evaluate and rationalize our data tooling, from notebooks (Marimo, Jupyter) and Databricks to BI/analytics platforms (Redash and alternatives), and guide Voltus toward a sustainable, coherent data platform.
Implement observability for data systems (logging, alerting, metrics) so issues are detected early and data quality is continuously monitored.
Champion data governance and documentation, making datasets well-defined, trustworthy, and easy to navigate.
Collaborate with analysts, data scientists, and platform engineers to ensure the infrastructure you build is intuitive, scalable, and solves real-world problems.
Lay the groundwork for advanced applications by making Voltus’ data reliably accessible via well-documented interfaces, positioning us to adapt to future ML and AI use cases.
Requirements
Proven experience in a data engineering or infrastructure role, with responsibility for production-grade pipelines and data systems.
Skilled in a programming language such as Python (Go is a bonus).
Deep experience with ETL/ELT pipelines, dbt, and integrating disparate data sources into warehouses/lakes.
Familiarity with cloud data platforms (AWS, GCP) and modern data tooling; we run on AWS.
Experienced in workflow orchestration (Airflow, Dagster, or similar).
Comfortable evaluating tradeoffs across notebook and analysis platforms (Jupyter, Marimo, Databricks) and recommending sustainable solutions.
Knowledge of BI/analytics tools (Redash, Looker, Mode, Superset, etc.) and how to support or migrate to them.
Strong understanding of data quality, governance, and observability.
A clear communicator who can work across technical and non-technical teams to define requirements and deliver solutions.
Comfortable taking end-to-end ownership of critical data infrastructure and serving as the point person for its reliability and direction.
Familiarity with observability/monitoring tools (e.g., Datadog, Prometheus).
Benefits
Base pay is $160,000-$190,000 annually, commensurate with experience, plus a 10% bonus paid semi-annually and equity.
Unlimited leave for full-time employees.
Parental leave.
Comprehensive benefits package to promote health, wellness, and financial security.
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard skills
data pipeline architecture, ETL pipelines, data modeling, workflow orchestration, Python, dbt, data quality, data governance, observability, cloud data platforms