Tech Stack
Azure, Cloud, ETL, PySpark, Python, SQL, SSIS
About the role
- Design, build, and maintain data pipelines and infrastructure supporting enterprise analytics and reporting
- Develop and maintain incremental data pipelines using SSIS, PySpark notebooks, and Python
- Implement and optimize ETL/ELT processes for Azure SQL databases and data lake environments
- Build meta-driven, scalable data ingestion frameworks supporting SCD Type 2 methodology
- Write and optimize T-SQL stored procedures, views, and dynamic SQL for data transformations
- Develop Python-based automation including Azure Functions, blob triggers, and API integrations
- Maintain data quality, perform root cause analysis, and resolve data pipeline issues
- Support CI/CD processes including code reviews, Git version control, and Azure DevOps workflows
- Monitor pipeline performance and implement optimization strategies
- Assist with data modeling, indexing strategies, and schema evolution
- Collaborate with business stakeholders and the Analytics team to meet reporting requirements
- Participate in data platform modernization and improvement initiatives
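For candidates unfamiliar with the SCD Type 2 methodology named above: each dimension key keeps a history of versions, with the old row closed out and a new current row opened whenever an attribute changes. A minimal illustrative sketch in plain Python (the schema fields `key`, `value`, `valid_from`, `valid_to`, and `is_current` are hypothetical, not from this posting):

```python
from datetime import date

def scd2_merge(dim_rows, incoming, today=None):
    """Apply a Slowly Changing Dimension Type 2 merge (illustrative only).

    dim_rows: list of dicts with hypothetical keys 'key', 'value',
              'valid_from', 'valid_to', 'is_current'.
    incoming: list of dicts with keys 'key', 'value' from the source.
    Changed or new keys get a fresh current row; superseded versions
    are closed out with valid_to = today. Existing row dicts are
    updated in place; the returned list includes old and new rows.
    """
    today = today or date.today().isoformat()
    result = list(dim_rows)
    current = {r["key"]: r for r in result if r["is_current"]}
    for rec in incoming:
        cur = current.get(rec["key"])
        if cur is not None and cur["value"] == rec["value"]:
            continue  # unchanged: keep the existing current row
        if cur is not None:
            cur["is_current"] = False  # close out the old version
            cur["valid_to"] = today
        result.append({
            "key": rec["key"],
            "value": rec["value"],
            "valid_from": today,
            "valid_to": None,
            "is_current": True,
        })
    return result
```

In production this logic would typically live in a T-SQL `MERGE` statement or a PySpark job rather than row-at-a-time Python, but the version-tracking shape is the same.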
Requirements
- Proficiency with SQL Server, T-SQL, and database concepts (indexing, query optimization)
- Experience with ETL/ELT tools and concepts (SSIS or similar preferred)
- Understanding of data warehousing, dimensional modeling, and SCD methodologies
- Familiarity with Azure cloud services (especially Azure SQL Database)
- Ability to troubleshoot data issues and perform data analysis
- Strong problem-solving skills and attention to detail
- Experience with version control (Git) and collaborative development
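Incremental pipelines of the kind this role maintains usually track a high-water mark per source table and pull only rows changed since the last run. A minimal sketch, assuming a hypothetical `modified_at` timestamp column (not specified in this posting):

```python
def extract_incremental(rows, watermark, ts_column="modified_at"):
    """Return rows changed since the last run, plus the new watermark.

    rows: iterable of dicts representing source rows (hypothetical shape).
    watermark: last-seen value of ts_column, or None on the first run
               (None means a full initial load).
    """
    changed = [r for r in rows if watermark is None or r[ts_column] > watermark]
    # Advance the watermark only if something new arrived.
    new_watermark = max((r[ts_column] for r in changed), default=watermark)
    return changed, new_watermark
```

The same pattern appears in SSIS (stored watermark variables) and in PySpark notebooks (filtering on a persisted last-load timestamp).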
Benefits
- Health insurance
- Retirement plans
- Paid time off
- Flexible work arrangements
- Professional development
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard skills
SQL Server, T-SQL, ETL, ELT, SSIS, PySpark, Python, Azure SQL Database, data modeling, data transformation
Soft skills
problem-solving, attention to detail, collaboration, root cause analysis