Responsibilities
Build scalable data solutions in the Microsoft Azure ecosystem (Microsoft Fabric, Azure Databricks)
Gather and analyze requirements
Design and implement data pipelines using Microsoft Fabric & Databricks
Extract, transform, and load (ETL) data from various sources into Azure Data Lake Storage
Implement data security and governance measures
Monitor and optimize data pipelines for performance and efficiency
Troubleshoot and resolve data engineering issues
Provide optimized solutions for data engineering problems
Work with a variety of sources, such as relational databases, APIs, file systems, real-time streams, and CDC (change data capture)
Develop and maintain Delta tables, following Databricks best practices
Requirements
5–8 years of experience in Data Engineering or related roles
Hands-on experience in Microsoft Fabric
Hands-on experience in Azure Databricks
Proficiency in PySpark for data processing and scripting
Strong command of Python and SQL, including writing complex queries and performance tuning
Experience working with Azure Data Lake Storage and Data Warehouse concepts (e.g., dimensional modeling, star/snowflake schemas)
Experience with databases such as Azure Synapse, Azure SQL Database, and Snowflake
Hands-on experience with performance tuning and optimization on Databricks and Microsoft Fabric
Understanding of CI/CD practices in a data engineering context
Excellent problem-solving and communication skills
Exposure to BI tools like Power BI, Tableau, or Looker
Education: B.E./B.Tech or M.E./M.Tech (Bachelor's or Master's of Engineering)
Mandatory skill sets: Microsoft Fabric, Azure, Data Engineering
Preferred: Azure DevOps, Scala or other distributed processing frameworks, familiarity with data security and compliance in the cloud, experience leading a development team
Benefits
Inclusive benefits
Flexibility programmes
Mentorship
Wellbeing support
ATS Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard skills
data engineering, Microsoft Fabric, Azure Databricks, PySpark, Python, SQL, ETL, Delta tables, performance tuning, data governance