Collaborate with clients to understand business objectives and technical requirements, and translate them into data solutions
Lead end-to-end delivery of data initiatives: design, build, and optimize scalable data pipelines, data lakes, and architectures using platforms such as Databricks, Microsoft Fabric, Snowflake, and Azure Data Services
Lead engineering for real-time and ML/AI use cases, including data/feature pipelines, Kafka/Kinesis streaming, vector database integrations, and integrations with LLM/AI services (AWS Bedrock, Azure OpenAI, Vertex AI)
Lead and mentor a small technical team to ensure successful delivery and team development
Contribute technical expertise to proposals, solution design, and client conversations
Manage Agile/Scrum project delivery, from backlog management through sprint execution
Work closely with cross-functional teams (data engineers, analysts, business stakeholders) to deliver cohesive solutions
Share knowledge, help shape best practices within the data practice, and recommend improvements based on emerging technologies
Requirements
Bachelor’s degree in Computer Science, IT, or a related field
5–7 years of experience in data engineering or cloud data roles
Hands-on experience with modern data platforms, including building and optimizing data pipelines
Strong expertise in at least two of the following: Databricks, AWS, Snowflake, Azure (Data Factory, Synapse, ADLS)
Experience with Databricks and Microsoft Fabric
Experience with real-time and ML/AI use cases, including data/feature pipelines
Experience with Kafka or Kinesis and vector database integrations
Experience integrating with LLMs and AI services (AWS Bedrock, Azure OpenAI, Vertex AI)
Experience in client-facing roles, including pre-sales and proposal contributions
Experience managing projects or leading technical teams using Agile/Scrum methodologies
Excellent communication skills, with the ability to explain technical concepts to both technical and non-technical audiences