Design, build, and maintain end-to-end analytics ETL/ELT pipelines using Microsoft Fabric components, Azure Data Factory, and Synapse Data Engineering
Create performant ETL/ELT workflows with PySpark and SQL, and own the resulting production pipelines (a PySpark sketch follows this list)
Manage telemetry and time-series data in Azure Data Explorer (ADX) using KQL; define ingestion patterns, retention policies and hot/cold storage strategies (a KQL example follows this list)
Implement storage and file-layout best practices (Parquet/Delta, partitioning, compaction) in OneLake for high-throughput time-series data, as illustrated in the sketches below
Own data modeling and transformations across Spark (PySpark) and SQL, producing performant schemas and semantic models for analytics
Apply CI/CD and infrastructure-as-code, automate testing and deployments, and use Git-based workflows for repeatable releases
Establish observability and monitoring: configure metrics and alerts, and drive capacity planning and cost monitoring for pipeline reliability
Ensure data quality, lineage, security and compliance: implement validation checks, access controls and documentation, and collaborate with data governance teams
Collaborate with product owners, analysts and BI developers to translate business requirements into data solutions and analytics-ready datasets
Mentor and guide junior engineers, contribute to team best practices, and drive architectural and platform improvements
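To make the PySpark and Delta responsibilities above concrete, here is a minimal sketch of an ELT step that rolls raw telemetry up to hourly grain and writes a date-partitioned Delta table. It assumes a Fabric lakehouse runtime where the delta format is configured; the paths, column names and telemetry schema are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

# In a Fabric notebook `spark` is preconfigured; the builder is shown for completeness.
spark = SparkSession.builder.getOrCreate()

RAW_PATH = "Files/raw/telemetry/"       # hypothetical OneLake landing zone
GOLD_PATH = "Tables/telemetry_hourly"   # hypothetical Delta output path

raw = spark.read.json(RAW_PATH)

# Roll raw device readings up to hourly grain for analytics consumers.
hourly = (
    raw.withColumn("event_date", F.to_date("event_ts"))
       .withColumn("event_hour", F.hour("event_ts"))
       .groupBy("device_id", "event_date", "event_hour")
       .agg(
           F.avg("temperature").alias("avg_temperature"),
           F.count("*").alias("reading_count"),
       )
)

# Partitioning by date keeps scans cheap and file counts manageable
# for high-throughput time-series data.
(
    hourly.write.format("delta")
          .mode("overwrite")
          .partitionBy("event_date")
          .save(GOLD_PATH)
)
```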
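And a hedged sketch of the ADX side: querying telemetry as a regular time series with KQL through the azure-kusto-data Python SDK. The cluster URL, database, table and schema are hypothetical, and Azure CLI authentication is only one of the connection-string builder options.

```python
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

CLUSTER = "https://mycluster.westeurope.kusto.windows.net"  # hypothetical
DATABASE = "Telemetry"                                      # hypothetical

# Reuse a signed-in Azure CLI session; other auth builders exist on the class.
kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(CLUSTER)
client = KustoClient(kcsb)

# Hourly average temperature per device over the last day, binned into a
# regular series -- the make-series pattern typical for ADX telemetry.
query = """
RawTelemetry
| where event_ts > ago(1d)
| make-series avg_temp = avg(temperature) default = real(null)
    on event_ts from ago(1d) to now() step 1h
    by device_id
"""

response = client.execute(DATABASE, query)
for row in response.primary_results[0]:
    # avg_temp comes back as an array of hourly values per device.
    print(row["device_id"], row["avg_temp"])
```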
Requirements
3–6 years of professional data engineering experience, including at least 2 years of advanced, hands‑on work with Microsoft Fabric Data Engineering (pipelines, notebooks, lakehouse/warehouse) and OneLake integration
Proven ability to design, build and maintain end‑to‑end ETL/ELT pipelines using Fabric, Azure Data Factory and/or Synapse Data Engineering
Strong Spark experience (PySpark and/or Scala) and practical familiarity with Delta/Parquet formats for large‑scale transformations and storage
Advanced SQL/T‑SQL development skills with demonstrated performance tuning and optimization for multi‑GB/TB datasets
Advanced experience with Azure Data Explorer (Kusto/ADX) and Kusto Query Language (KQL), including complex queries, aggregations, time‑series analysis and query performance tuning
Practical experience with ADX ingestion and telemetry patterns (e.g., queued/batched ingestion), retention strategies and materialized views for gold‑layer outputs (see the materialized‑view sketch after this list)
Knowledge of file‑layout best practices for time‑series data in Parquet/Delta on OneLake: partitioning, compaction and small‑file handling (see the compaction sketch after this list)
Familiarity with Synapse Data Engineering, Azure Data Factory, and related Fabric runtimes (lakehouses and data warehouses)
Practical experience integrating data outputs with BI tools and semantic models; Power BI integration experience is beneficial
Experience applying development best practices: Git, CI/CD for data pipelines, infrastructure‑as‑code patterns, and automated testing and deployment for data artifacts (see the test sketch after this list)
Skills in monitoring and operating production pipelines: observability, alerting, incident response, capacity planning and cost optimization
Strong English communication skills and experience working in Agile teams
Demonstrated ability to mentor junior engineers and influence architectural and technical improvements across Fabric/Azure data platforms
Working familiarity with adjacent Azure services (Azure Data Lake, Databricks, Logic Apps) and understanding of data governance/catalog concepts (OneLake/Purview)
Nice to have: exposure to Azure Event Hubs and the Data Collector API, experience designing performance benchmarks and load tests, familiarity with data‑testing practices and frameworks, and Microsoft/Azure certifications
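To picture the materialized-view requirement, here is a hedged sketch of creating a gold-layer hourly rollup that ADX then maintains incrementally as new telemetry arrives; the cluster, database, table and view names are all hypothetical.

```python
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(
    "https://mycluster.westeurope.kusto.windows.net"  # hypothetical cluster
)
client = KustoClient(kcsb)

# Management command: a materialized view that keeps an hourly, per-device
# rollup up to date on ingestion -- a typical gold-layer output.
create_mv = """
.create materialized-view DeviceHourly on table RawTelemetry
{
    RawTelemetry
    | summarize avg_temp = avg(temperature), readings = count()
        by device_id, bin(event_ts, 1h)
}
"""
client.execute_mgmt("Telemetry", create_mv)  # hypothetical database name
```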
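The small-file handling called out in the storage requirement might look like the following, assuming delta-spark >= 2.0 (or a Fabric runtime) where compaction and vacuum are available; the table path and retention window are hypothetical.

```python
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()

# Hypothetical Delta table produced by frequent micro-batch writes.
table = DeltaTable.forPath(spark, "Tables/telemetry_hourly")

# Compact many small files into fewer large ones so time-series scans
# do not pay a per-file open cost.
table.optimize().executeCompaction()

# Then drop stale files outside the retention window (in hours).
table.vacuum(168)
```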
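Finally, the automated-testing requirement can be sketched as a pytest unit test over a single transformation running on a local Spark session; the dedupe_latest function and its schema are hypothetical stand-ins for a real pipeline step.

```python
import pytest
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window


def dedupe_latest(df, key_col, ts_col):
    # Keep only the most recent row per key -- a hypothetical pipeline step.
    w = Window.partitionBy(key_col).orderBy(F.col(ts_col).desc())
    return (
        df.withColumn("_rn", F.row_number().over(w))
          .filter(F.col("_rn") == 1)
          .drop("_rn")
    )


@pytest.fixture(scope="module")
def spark():
    return SparkSession.builder.master("local[1]").getOrCreate()


def test_dedupe_keeps_latest_reading(spark):
    df = spark.createDataFrame(
        [
            ("dev1", "2024-01-01 10:00:00", 20.0),
            ("dev1", "2024-01-01 11:00:00", 21.5),
        ],
        ["device_id", "event_ts", "temperature"],
    )
    result = dedupe_latest(df, "device_id", "event_ts").collect()
    assert len(result) == 1
    assert result[0]["temperature"] == 21.5
```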
Benefits
Join a powerful tech workforce and help us change the world through technology.
Professional development opportunities with international customers.
Collaborative work environment.
Career-path and mentorship programs that help you advance to new levels.
Equal-opportunity employer committed to creating an inclusive environment for all employees.
Remote work (LATAM)
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard skills
ETL, ELT, PySpark, SQL, KQL, Azure Data Factory, Microsoft Fabric, Data Engineering, Delta, Parquet