SpendMend

Senior DataOps Engineer

Full-time

Origin: 🇺🇸 United States

Job Level

Senior

Tech Stack

Apache, Azure, JavaScript, Kafka, Python, Scikit-Learn, Spark, SQL, Unity

About the role

  • SpendMend partners with hospitals, health systems, and higher education institutions to improve financial performance and patient care.
  • The Senior DataOps Engineer is responsible for building and operating a modern lakehouse infrastructure (Databricks/Delta Lake/dbt).
  • Construct high-performance ingestion pipelines and model curated data marts and semantic layers for analytics and ML.
  • Tune Spark-based systems for optimal cost and performance and implement schema governance and data contracts.
  • Define and monitor SLIs/SLOs, integrate quality checks into CI/CD, and establish lineage, automated backfills, and runbooks.
  • Enable telemetry for business outcomes and ROP reporting; lead incident triage, resolution, and postmortem documentation.
  • Collaborate with Platform, SRE, Security, GRC, Analytics Engineering, MLOps, product owners, and stakeholders.
  • Job Location: Remote (U.S. Time Zones) or Hybrid (Grand Rapids, MI); travel expected at least once per year.
  • Note: Company cannot sponsor work visas; reasonable accommodations available during recruitment.
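To give a concrete flavor of the "quality checks in CI/CD" and "data contracts" responsibilities above, here is a minimal sketch of a contract-style validation that could run as a pipeline gate. All names (the `CONTRACT` mapping, column names, `validate_rows`) are illustrative, not part of the posting; a production version would typically express such checks in dbt tests or a Spark job rather than plain Python.

```python
# Hypothetical data contract: column name -> expected Python type.
# In practice this would be derived from a schema registry or dbt model config.
CONTRACT = {"claim_id": str, "amount": float, "posted_date": str}

def validate_rows(rows, contract=CONTRACT):
    """Return a list of (row_index, column, reason) contract violations."""
    violations = []
    for i, row in enumerate(rows):
        for col, expected_type in contract.items():
            if col not in row:
                violations.append((i, col, "missing"))
            elif not isinstance(row[col], expected_type):
                violations.append((i, col, f"expected {expected_type.__name__}"))
    return violations

rows = [
    {"claim_id": "C-1", "amount": 120.5, "posted_date": "2024-01-05"},
    {"claim_id": "C-2", "amount": "bad", "posted_date": "2024-01-06"},
]
bad = validate_rows(rows)  # one violation: row 1, column "amount"
```

A CI job would fail the build (or quarantine the batch) whenever `validate_rows` returns a non-empty list, which is the gating pattern the role description implies.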

Requirements

  • Bachelor’s degree in Computer Science, Engineering, or related field, or equivalent work experience.
  • 5+ years of hands-on experience building data platforms and pipelines at scale.
  • Strong SQL and Python programming proficiency.
  • Expertise with Databricks, Apache Spark, and dbt.
  • Proficiency in Git-based CI/CD pipelines (GitHub Actions or Azure DevOps).
  • Experience managing production incidents and authoring technical postmortems.
  • Ability to translate business KPIs into telemetry and reporting systems.
  • Deep understanding of ADLS Gen2 architecture, performance tuning, and security.
  • Hands-on experience with Azure identity/access management (Microsoft Entra ID, RBAC, Managed Identities, Service Principals).
  • Familiarity with Azure Monitor, Log Analytics, alerts, and action groups.
  • Experience with LLM application stacks (e.g., LangChain, OpenAI API) (preferred).
  • Proficiency in scikit-learn for feature engineering and predictive modeling (preferred).
  • Experience with event-based streaming using Kafka or Azure Event Hubs (preferred).
  • Familiarity with web scraping frameworks and ethical data collection practices (preferred).