Habitat Health

Healthcare Data Engineer

Employment Type: Full-time

Location Type: Remote

Location: California, United States

Salary

💰 $135,000 - $155,000 per year

About the role

  • Support the development and maintenance of secure, scalable data systems that enable clinical, operational, and financial analytics in a HIPAA‑regulated environment.
  • Contribute to building and operating data pipelines that ingest, transform, and standardize data from internal and external healthcare sources such as EHRs, claims vendors, and SDOH APIs.
  • Work closely with analysts and business stakeholders to understand requirements and help create reliable, analytics‑ready datasets and basic data models.
  • Assist in documenting data processes and governance practices, including data lineage, access controls, and handling procedures for sensitive health information.
  • Participate in data quality efforts by implementing validation checks, monitoring data flows, and helping identify anomalies that could impact downstream analytics.
  • Help develop and maintain reusable dbt models and shared data transformations, with exposure to open healthcare data models like Tuva, OMOP, or FHIR.
  • Collaborate with senior engineers to improve infrastructure and team processes, taking on increasing responsibility as skills grow and the data engineering function expands.

Requirements

  • Bachelor's degree in Computer Science, Software Engineering, Data Science, Statistics, or related technical field.
  • Experience with data engineering concepts, gained through internships or 3+ years in a related technical role.
  • Familiarity with at least one cloud platform (Azure preferred) and a willingness to learn cloud‑based data tools and services.
  • Understanding of data architecture principles and interest in learning how to design scalable, secure, and cost‑efficient systems.
  • Exposure to building or maintaining data pipelines using tools like Airflow or Prefect.
  • Ability to work with structured and unstructured data sources, including files, APIs, and common data transfer methods.
  • Solid foundational Python and SQL skills, with the ability to write clean, maintainable code and eagerness to grow into more advanced development.
  • Ability to leverage (not rely on) AI coding tools such as Claude Code, GitHub Copilot, or Cursor.
  • Awareness of data governance, access controls, and security best practices.
  • Comfortable working independently on well‑defined tasks.
  • Alignment with our mission and values, and enthusiasm for contributing positively to the team’s culture and daily practices.

Benefits

  • Medical, dental, and vision insurance
  • Short- and long-term disability
  • Life insurance
  • Flexible spending accounts
  • 401(k) savings
  • Paid time off
  • Company-paid holidays

Applicant Tracking System Keywords

Tip: use these terms in your resume and cover letter to boost ATS matches.

Hard Skills & Tools
data engineering, data pipelines, Python, SQL, dbt, data architecture, data modeling, data validation, data transformation, data quality
Soft Skills
collaboration, communication, problem-solving, independence, adaptability, attention to detail, analytical thinking, organizational skills, teamwork, enthusiasm