Tech Stack
Azure, ETL, Python, Spark, SQL, SSIS, Unity
About the role
- Assist in designing, building, and maintaining ETL/ELT data pipelines using Microsoft Fabric and Azure Databricks.
- Support migration and maintenance of SSIS packages from legacy systems.
- Implement medallion architecture (Bronze, Silver, Gold) for data lifecycle and quality.
- Create and manage notebooks (Fabric Notebooks, Databricks) for data transformation using Python, SQL, and Spark.
- Build curated datasets to support Power BI dashboards.
- Collaborate with data analysts and business stakeholders to deliver fit-for-purpose data assets.
- Apply data governance policies in line with Microsoft Purview or Unity Catalog.
- Support monitoring, logging, and CI/CD automation using Azure DevOps.
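The medallion responsibilities above can be illustrated with a minimal sketch. This is plain Python for readability; in practice the same logic would live in Fabric or Databricks notebooks as Spark DataFrame transformations. All field names, values, and the quarantine strategy here are hypothetical.

```python
# Illustrative Bronze -> Silver -> Gold flow (medallion architecture).
# Bronze: raw records as ingested; Silver: validated and typed;
# Gold: business-level aggregates ready for Power BI.
from collections import defaultdict

# Bronze layer: raw rows, untyped and possibly malformed.
bronze = [
    {"order_id": "1", "amount": "19.99", "region": "EMEA"},
    {"order_id": "2", "amount": "bad",   "region": "EMEA"},  # malformed row
    {"order_id": "3", "amount": "5.00",  "region": "APAC"},
]

def to_silver(rows):
    """Silver layer: keep only rows that parse cleanly into typed fields."""
    silver = []
    for row in rows:
        try:
            silver.append({
                "order_id": int(row["order_id"]),
                "amount": float(row["amount"]),
                "region": row["region"],
            })
        except (KeyError, ValueError):
            continue  # a real pipeline would quarantine and log the row
    return silver

def to_gold(rows):
    """Gold layer: aggregate revenue per region for reporting."""
    revenue = defaultdict(float)
    for row in rows:
        revenue[row["region"]] += row["amount"]
    return dict(revenue)

gold = to_gold(to_silver(bronze))
# The malformed row is dropped at Silver; Gold holds per-region totals.
```

The same shape maps directly onto Spark: `to_silver` becomes a schema-enforced read with filtering, and `to_gold` a `groupBy().agg()` over the Silver table.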
Requirements
- Bachelor's degree in Computer Science, Information Systems, or a related field.
- 2–3 years of experience in data engineering or related roles.
- Proficiency in SQL, Python, Spark.
- Familiarity with LangGraph and retrieval-augmented generation (RAG) database concepts.
- Hands-on experience with Microsoft Fabric and Power BI.
- Understanding of ETL/ELT pipelines and data warehousing concepts.
- Knowledge of CI/CD automation with Azure DevOps (nice to have).
- Familiarity with data governance tools (Microsoft Purview, Unity Catalog) (nice to have).
- Experience with SSIS package migration and maintenance (nice to have).
- On-site: Tuesday to Thursday (per manager’s discretion)
- Mandatory in-person meetings: All Hands, Enterprise Applications, DECAL All Staff
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard skills
ETL, ELT, data pipelines, Python, SQL, Spark, data transformation, data engineering, CI/CD automation, data warehousing
Soft skills
collaboration, communication
Certifications
Bachelor's degree in Computer Science, Bachelor's degree in Information Systems