TMS

Databricks Architect – 15 years exp.

Employment Type: Contract

Location Type: Remote

Location: Missouri, United States


About the role

  • Architect and implement Databricks Lakehouse solutions for large-scale data platforms
  • Design and optimize batch & streaming data pipelines using Apache Spark (PySpark/SQL)
  • Implement Delta Lake best practices (ACID, schema enforcement, time travel, performance tuning)
  • Build and manage Databricks jobs, workflows, notebooks, and clusters
  • Enable data governance using Unity Catalog (access control, lineage)
  • Integrate Databricks with cloud data services (ADLS / S3, ADF, Synapse, etc.)
  • Support analytics, BI, and AI/ML workloads (MLflow exposure is a plus)
  • Lead solution design discussions and mentor data engineering teams

Requirements

  • 14-15 years of total experience
  
  • 10+ years in data engineering / data architecture
  • 5+ years of strong hands-on experience with Databricks
  • Expert in Apache Spark, PySpark, SQL
  • Strong experience with Delta Lake & Lakehouse architecture
  • Cloud experience on Azure Databricks / AWS Databricks
  • Proven experience in designing high-volume, scalable data pipelines
  • Good-to-Have: Unity Catalog, MLflow, Databricks Workflows, Streaming experience (Kafka / Event Hubs), CI/CD for Databricks (Azure DevOps / GitHub)
Applicant Tracking System Keywords

Tip: Use these terms in your resume and cover letter to boost ATS matches.

Hard Skills & Tools
Databricks, Apache Spark, PySpark, SQL, Delta Lake, Lakehouse architecture, data pipelines, MLflow, CI/CD, streaming data
Soft Skills
leadership, mentoring, solution design, communication