
Azure Databricks Data Engineer
OZ
Full-time
Location Type: Remote
Location: Argentina
About the role
- Design and implement end-to-end data solutions on the Azure platform, including data ingestion, processing, storage, and visualization.
- Develop and maintain data pipelines using Azure Data Factory, Azure Databricks, Azure Data Lake Storage, and other relevant tools and technologies.
- Collaborate with data architects and data scientists to understand data requirements and design scalable and optimized data models and schemas.
- Implement data integration solutions to extract, transform, and load (ETL) data from various sources into Azure data platforms.
- Ensure the reliability, availability, and performance of data solutions by monitoring and optimizing data pipelines and storage systems.
- Troubleshoot and resolve data-related issues, including data quality, performance, and security concerns.
- Collaborate with cross-functional teams to gather business requirements and translate them into technical solutions.
- Stay updated with the latest trends and advancements in Azure data technologies and provide recommendations for adopting new tools and techniques.
- Perform data profiling, data validation, and data cleansing activities to ensure data accuracy and consistency.
- Document technical specifications, data flows, and processes for reference and knowledge sharing.
Requirements
- 2+ years of proven work experience with Azure data integration services, data modeling, and data architecture.
- Proven experience as a Data Engineer with a focus on Azure cloud technologies.
- Strong knowledge of Azure data services, including Azure Data Factory, Azure Databricks, Azure Data Lake Storage, Azure SQL Database, and Azure Synapse Analytics.
- Proficient in programming languages such as Python, SQL, and PowerShell for data manipulation and automation.
- Experience with data modeling and designing efficient data structures for analytics and reporting purposes.
- Solid understanding of data integration techniques, including ETL processes and data transformation.
- Familiarity with big data technologies like Apache Spark and Hadoop is a plus.
- Strong problem-solving skills and the ability to debug and resolve complex data issues.
- Excellent communication and collaboration skills to work effectively with cross-functional teams.
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard Skills & Tools
data integration, data modeling, data architecture, ETL, data manipulation, data processing, data storage, data visualization, data profiling, data cleansing
Soft Skills
problem-solving, communication, collaboration