Design and implement scalable data ingestion and transformation frameworks using Azure services to process structured, semi-structured, and unstructured data.
Build and maintain robust ETL/ELT pipelines using Azure Data Factory and Azure Databricks.
Integrate data from on-premises systems, cloud storage, APIs, and streaming platforms.
Develop and optimize notebooks and workflows in Azure Databricks using PySpark and SQL.
Implement Delta Lake for efficient data storage, versioning, and ACID transactions; leverage Unity Catalog and job orchestration.
Design and implement data models (star/snowflake schemas) for analytics and reporting; collaborate on data lakehouse architecture and Medallion Architecture (Bronze/Silver/Gold).
Implement data validation, profiling, and cleansing routines; ensure data lineage and metadata management for governance.
Monitor and optimize performance of Spark jobs and data pipelines; troubleshoot data latency, job failures, and resource utilization.
Work closely with data scientists, analysts, and business units to translate business needs into scalable technical solutions.
Implement role-based access control (RBAC), encryption, and secure data handling practices; ensure compliance with industry regulations (e.g., NERC CIP, GDPR, HIPAA if applicable).
Maintain documentation of data flows, architecture, and operational procedures; promote code versioning, testing, and CI/CD best practices.
Requirements
Bachelor's degree in information systems, computer science, or a related technical field, or equivalent work experience.
Without a bachelor's degree, typically four additional years of related, progressive work experience is needed.
Level 1: Minimum of two years of additional directly related technical experience in IT operations or data engineering.
Level 2/3: Minimum of three or more years of additional directly related technical experience in data engineering, data integration, or database administration.
Senior: Six or more years of experience, with advanced knowledge of data architecture, cloud platforms (especially Azure), and enterprise data solutions.
Proficiency in Azure Data Factory and Azure Databricks.
Hands-on experience in Azure data services and pipeline development.
Strong understanding of data modeling, ETL/ELT processes, and performance tuning of enterprise-level applications.
Expert-level knowledge of data-related technologies from architecture to administration, including design, development, optimization, and licensing.
Proven experience working in the utility industry.
Effective oral and written communication skills, collaboration, and ability to mentor junior engineers.
Strong analytical and problem-solving abilities.
Ability to prioritize and manage multiple tasks and projects concurrently.
Employees must be able to perform the essential functions of the position, with or without an accommodation.
ATS Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard skills
data ingestion, data transformation, ETL, ELT, data modeling, performance tuning, data validation, data profiling, data cleansing, data architecture