Design, build, and optimize scalable data pipelines using **Azure Data Factory**, **DBT**, and **Databricks**.
Develop modular, testable DBT models for data transformation and analytics.
Implement data orchestration and workflow automation in Azure environments.
Ensure high performance, reliability, and scalability of data workflows.
Build and maintain Snowflake-based data architectures and pipelines.
Leverage Snowflake Cortex functions for anomaly detection, time-series forecasting, classification, text completion, embedding, sentiment analysis, and summarization.
Integrate Snowflake Cortex with UI tools like **Copilot**, **Universal Search**, and **Document AI**.
Implement robust data validation frameworks using Snowflake Cortex anomaly detection and custom logic via Snowpark UDFs.
Collaborate with analytics teams to deliver clean, trusted datasets.
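As a flavor of the custom validation logic mentioned above, here is a minimal sketch of an outlier check that could be wrapped in a Snowpark UDF and applied to a numeric column. The function name, z-score approach, and threshold are illustrative assumptions, not a prescribed implementation; a production framework would likely combine checks like this with Cortex anomaly detection.

```python
from statistics import mean, stdev

def zscore_outliers(values, threshold=3.0):
    """Return indices of values whose z-score exceeds the threshold.

    Plain-Python stand-in for validation logic that could be
    registered as a Snowpark UDF and run inside Snowflake.
    Hypothetical helper; name and threshold are illustrative.
    """
    if len(values) < 2:
        return []  # not enough data to estimate spread
    mu = mean(values)
    sigma = stdev(values)
    if sigma == 0:
        return []  # constant column: no outliers by this rule
    return [i for i, v in enumerate(values)
            if abs(v - mu) / sigma > threshold]

# Example: a single extreme reading among stable values is flagged.
readings = [10.0] * 20 + [1000.0]
print(zscore_outliers(readings))  # → [20]
```

In a real pipeline, the same logic would be registered with `session.udf.register` (Snowpark) so the check runs next to the data rather than client-side.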
Requirements
12+ years of experience in data engineering, with a strong focus on **Azure Data Factory**, **DBT**, and **Databricks**.
Deep expertise in **Snowflake** and Snowpark APIs.
Experience with Snowflake Cortex ML and LLM functions.
Strong proficiency in **Python** and **SQL**.
Familiarity with open-source LLMs and GenAI integration within Snowflake.
Excellent understanding of data modeling, transformation, and pipeline optimization.
Strong collaboration skills across engineering, analytics, and business teams.
Benefits
EXL never requires or asks for fees/payments or credit card or bank details during any phase of the recruitment or hiring process and has not authorized any agencies or partners to collect any fee or payment from prospective candidates. EXL will only extend a job offer after a candidate has gone through a formal interview process with members of EXL’s Human Resources team, as well as our hiring managers.
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard skills
Azure Data Factory, DBT, Databricks, Snowflake, Snowpark, Python, SQL, data modeling, data transformation, pipeline optimization