CI&T

Senior Data Engineer

full-time

Origin: 🇵🇭 Philippines

Salary

💰 ₱120,000 - ₱140,000 per month

Job Level

Senior

Tech Stack

Apache, AWS, Azure, Cloud, Google Cloud Platform, Python, Spark, SQL, Terraform, Vault

About the role

  • Design, build, and maintain scalable data pipelines using Microsoft Fabric, Azure Data Factory, and related Azure services to support ingestion, transformation, and curation of enterprise data.
  • Collaborate with architects, analysts, data modellers, and business stakeholders to deliver high-quality, fit-for-purpose data solutions.
  • Participate in solution design and technical discussions to ensure robust, scalable, and maintainable data architectures.
  • Implement performance-optimised and secure data workflows, ensuring compliance with relevant governance and quality standards.
  • Create and maintain clear, accurate, and up-to-date technical documentation for developed data assets, pipelines, and processes.
  • Engage effectively with technical and non-technical team members to ensure shared understanding of requirements, solution approaches, and delivery expectations.
  • Translate business and analytical requirements into actionable technical solutions by leveraging appropriate Microsoft Fabric components and Azure services.
  • Ensure a stable and productive development environment, following CI/CD best practices, monitoring pipeline performance, and addressing issues promptly.
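The responsibilities above centre on layered ingestion, transformation, and curation of enterprise data. As an informal sketch of that flow (plain Python stands in for the Microsoft Fabric / Azure Data Factory tooling named above; all function, field, and layer names here are hypothetical):

```python
# Illustrative ingest -> transform -> curate flow. In practice these
# stages would run as Fabric or Azure Data Factory activities over
# Delta tables; plain dicts stand in for rows in this sketch.

def ingest(raw_records):
    """Bronze layer: land raw records unchanged."""
    return list(raw_records)

def transform(bronze):
    """Silver layer: standardise types and drop rows missing an id."""
    return [
        {**row, "amount": float(row["amount"])}
        for row in bronze
        if row.get("id") is not None
    ]

def curate(silver):
    """Gold layer: aggregate into a business-ready summary."""
    total = sum(row["amount"] for row in silver)
    return {"row_count": len(silver), "total_amount": total}

raw = [
    {"id": 1, "amount": "10.5"},
    {"id": None, "amount": "3.0"},  # rejected in the silver layer
    {"id": 2, "amount": "4.5"},
]
summary = curate(transform(ingest(raw)))
print(summary)  # {'row_count': 2, 'total_amount': 15.0}
```

Keeping each stage a small, pure function is also what makes the unit and integration testing asked for below practical.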

Requirements

  • More than six years of proven experience in data engineering, data warehousing, and analytics projects
  • Strong knowledge of modern data file formats such as Delta Tables, Apache Iceberg, and Parquet
  • Proficiency in Python, Spark, Azure Data Factory, and other relevant data transformation tools and frameworks
  • Expertise in data modelling across conceptual, logical, and physical layers, with knowledge of Dimensional Modelling and Data Vault 2.0 methodologies
  • Experience with cloud data services, preferably within the Microsoft Intelligent Data Platform ecosystem, with exposure to AWS or GCP data services as an advantage
  • Hands-on experience with cloud data platforms such as Microsoft Fabric, Databricks, and/or Snowflake
  • Skilled in data pipeline orchestration using Azure Data Factory and related orchestration frameworks
  • Experience with CI/CD pipelines and familiarity with DevOps practices to enable automated deployment and integration workflows
  • Proficiency in Infrastructure as Code (IaC) tools such as Terraform or Bicep for environment provisioning and configuration
  • Solid understanding of data governance, data management practices, and security considerations for cloud data platforms
  • Competence in database performance tuning, optimisation, and advanced SQL development
  • Familiarity with design patterns, clean architecture, and clean coding principles for maintainable and scalable data solutions
  • Knowledge of unit, integration, and end-to-end testing in the context of data engineering
  • Ability to mentor less experienced engineers
  • Consulting mindset
  • Strong communication skills
  • In-depth understanding of non-functional requirements including performance, security, data privacy, and protection
  • Passion for data and enthusiasm for AI
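The SQL tuning requirement above is the kind of skill that shows up even in small examples: knowing whether a query can use an index rather than a full table scan. A minimal sketch using SQLite in memory (table, column, and index names are illustrative only; on Fabric, Databricks, or Snowflake the same reasoning applies through their respective query plans):

```python
import sqlite3

# In-memory database standing in for a warehouse table.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE sales (id INTEGER PRIMARY KEY, region TEXT, amount REAL)"
)
conn.executemany(
    "INSERT INTO sales (region, amount) VALUES (?, ?)",
    [("APAC", 120.0), ("EMEA", 80.0), ("APAC", 40.0)],
)

# An index on the filter column lets the optimiser avoid a full scan.
conn.execute("CREATE INDEX idx_sales_region ON sales (region)")

# Inspect the plan to confirm the index is actually used.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT SUM(amount) FROM sales WHERE region = ?",
    ("APAC",),
).fetchall()
print(plan)  # the plan row should reference idx_sales_region

total = conn.execute(
    "SELECT SUM(amount) FROM sales WHERE region = ?", ("APAC",)
).fetchone()[0]
print(total)  # 160.0
```

Checking the plan, not just the result, is the habit the role asks for: the query returns the same answer either way, but only the indexed version scales.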