
Senior Data Engineer – Azure, Databricks
Deroyque IT
Full-time
Location Type: Remote
Location: Brazil
About the role
- Architecture and Development: Develop modular, scalable data pipelines (ingestion, processing, and cleansing) using PySpark and/or Scala.
- Modern Data Stack: Implement and maintain Lakehouse architectures following the Medallion pattern (Bronze, Silver, Gold); see the sketch after this list.
- CRM/CDP Strategy: Work on integrating and structuring data for CDP/CRM platforms, focusing on a 360-degree customer view.
- Quality and Automation: Create automated sanity checks and ensure data integrity in QA and Production environments.
- Optimization: Monitor and fine-tune the performance and cost of Databricks workloads and SQL queries.
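
For orientation, here is a minimal PySpark sketch of a Bronze → Silver → Gold flow of the kind described above. The paths, columns, and table names (events, customer_summary) are illustrative assumptions, not details from this posting.

```python
# Minimal Medallion-style sketch in PySpark. All paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: land raw files as-is, adding only ingestion metadata.
bronze = (
    spark.read.json("/mnt/raw/events/")  # hypothetical landing path
    .withColumn("_ingested_at", F.current_timestamp())
)
bronze.write.format("delta").mode("append").save("/mnt/bronze/events")

# Silver: cleanse and deduplicate into a conformed table.
silver = (
    spark.read.format("delta").load("/mnt/bronze/events")
    .dropDuplicates(["event_id"])
    .filter(F.col("customer_id").isNotNull())
)
silver.write.format("delta").mode("overwrite").save("/mnt/silver/events")

# Gold: aggregate into a business-facing customer summary.
gold = (
    silver.groupBy("customer_id")
    .agg(
        F.count("*").alias("event_count"),
        F.max("event_ts").alias("last_event_ts"),
    )
)
gold.write.format("delta").mode("overwrite").save("/mnt/gold/customer_summary")
```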
Requirements
- Databricks expertise: Advanced hands-on experience with the platform.
- Azure ecosystem: Strong experience with Data Factory, Synapse, and Data Lake Storage.
- Data engineering best practices: Proficiency in PySpark and advanced SQL.
- Data modeling: Deep experience in conceptual, logical, and physical modeling.
- Storage frameworks: Experience with Delta Lake and modern data architectures; see the sketch after this list.
- Advanced English: Ability to read technical documentation and communicate fluently in a professional setting.
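
As a small illustration of the Delta Lake experience listed above, here is a minimal upsert (MERGE) sketch in PySpark. It assumes a Databricks runtime or the delta-spark package; the table path and columns are hypothetical examples.

```python
# Minimal Delta Lake upsert sketch. Path and column names are hypothetical.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-merge-sketch").getOrCreate()

# Incoming batch of customer updates (hypothetical sample data).
updates = spark.createDataFrame(
    [("c-001", "ana@example.com"), ("c-002", "bruno@example.com")],
    ["customer_id", "email"],
)

target = DeltaTable.forPath(spark, "/mnt/silver/customers")  # hypothetical path

# Upsert: update existing customers, insert new ones, keeping the load idempotent.
(
    target.alias("t")
    .merge(updates.alias("u"), "t.customer_id = u.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```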
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard Skills & Tools
PySpark, Scala, SQL, data modeling, data engineering best practices, automated sanity checks, data integrity, performance optimization, Lakehouse architecture, Medallion pattern
Soft Skills
advanced English communication