Salary
💰 $112,455 - $170,100 per year
Tech Stack
Cloud, ETL, IoT, Python, Spark, SQL
About the role
- Own and operate our Databricks data store, ensuring performance, scalability, and reliability.
- Build and manage a visualization store and AI store that empower downstream analytics, machine learning, and AI applications.
- Design and optimize bulk GenAI data pipelines in Databricks to support generative AI applications at scale.
- Partner with AI engineers and data scientists to enable experimentation, model training, and production-grade deployments.
- Develop frameworks for data ingestion, transformation, governance, and monitoring across CRM, sales, and revenue systems.
- Collaborate with cross-functional teams to ensure data infrastructure meets both technical and business needs.
Requirements
- 5+ years of industry experience in data engineering, with significant experience building large-scale data platforms.
- Deep expertise in Databricks, Spark, and modern data lakehouse architectures.
- Proficiency in Python and SQL, with experience in designing robust ETL/ELT pipelines.
- Experience orchestrating data workflows at scale and enabling machine learning or AI use cases.
- Strong understanding of data modeling, performance optimization, and cost-efficient infrastructure design.
- Located in and authorized to work in the United States (this is a fully remote role).
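As a minimal sketch of the ETL/ELT pipeline design the requirements mention, the snippet below runs an extract-transform-load pass over CRM-style records using only the Python standard library (sqlite3 stands in for a warehouse table). All names here (`raw_records`, `clean_contacts`) are illustrative, not taken from the posting, and a production Databricks pipeline would use Spark rather than sqlite3.

```python
import sqlite3

# Illustrative raw input: messy CRM-style records with inconsistent
# casing, stray whitespace, string-typed numbers, and a duplicate.
raw_records = [
    {"email": " Alice@Example.com ", "revenue": "1200.50"},
    {"email": "bob@example.com", "revenue": "300"},
    {"email": " Alice@Example.com ", "revenue": "1200.50"},  # duplicate row
]

def transform(record):
    # Normalize email casing/whitespace and cast revenue to float.
    return (record["email"].strip().lower(), float(record["revenue"]))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE clean_contacts (email TEXT PRIMARY KEY, revenue REAL)")

# Load with idempotent upserts so duplicates and pipeline re-runs
# don't create conflicting rows.
for rec in raw_records:
    conn.execute(
        "INSERT INTO clean_contacts (email, revenue) VALUES (?, ?) "
        "ON CONFLICT(email) DO UPDATE SET revenue = excluded.revenue",
        transform(rec),
    )

rows = conn.execute(
    "SELECT email, revenue FROM clean_contacts ORDER BY email"
).fetchall()
print(rows)  # deduplicated, normalized rows
```

The upsert-based load is what makes the pipeline safely re-runnable, a property the same pattern provides at scale via `MERGE INTO` on Delta tables.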