CI&T

Data & AI Manager – Master

Full-time

Location Type: Remote

Location: Brazil


About the role

  • Lead and develop the team, promoting engineering best practices and a Data-as-a-Product culture (domain ownership, SLOs/SLIs).
  • Design and implement complex data pipelines (batch and streaming), with automation, orchestration, and ETL/ELT optimization.
  • Develop transformations in Python/PySpark and SQL, with automated tests (PyTest) and engineering standards.
  • Ensure end-to-end Data Quality: tests-as-code, dataset SLOs/SLIs, proactive alerting, and remediation automation.
  • Design the platform blueprint: metadata-first, lineage, policy-as-code, data catalog/discovery, semantic layer, and observability.
  • Incorporate GenAI into workflows: copilots for catalog and documentation, sensitive data classification (PII), assisted generation of policies and tests, and change intelligence (impact summaries).
  • Collaborate with multidisciplinary teams (data, product, security, compliance, business), translating business requirements into scalable, measurable technical solutions.
  • Drive architectural decisions (Lakehouse, Delta/Parquet, real-time with Pub/Sub/Kafka, microservices) with a focus on scalability, cost, and security.
  • Operate always-on compliance mechanisms (LGPD/PII): classification, masking, contextual access control, and end-to-end traceability.
  • Integrate the platform with the corporate ecosystem (APIs, events, legacy systems/SaaS), ensuring performance and reliability.
  • Deploy versioning, CI/CD, and IaC (Git/Azure DevOps) for reproducibility and reduced time-to-data.
  • Interact with senior management; present roadmaps, risks, and results; define and track OKRs/metrics (adoption, lead time, incidents, residual risk).
  • Evangelize and manage change to increase adoption and business value.

Requirements

  • Strong experience building data pipelines with Python, PyTest, PySpark, and SQL.
  • Solid experience in Google Cloud Platform (BigQuery, Dataproc, Dataflow, Pub/Sub, Composer/Airflow, IAM).
  • Strong experience with orchestration tools (Airflow/Composer, Dagster, Prefect).
  • Experience with Databricks on complex pipelines (Delta Lake, Jobs/Workflows, Unity Catalog).
  • Experience with relational and non-relational databases, including schema design and query optimization.
  • Experience in microservices architecture and enterprise integrations (REST/GraphQL, event-driven).
  • Proficiency with Git/Azure DevOps for versioning, CI/CD, and collaboration.
  • Hands-on experience in operational Data Governance: metadata, lineage, data catalog/discovery, data quality, security & access.
  • Experience leading technical teams and engaging with executives, with strong communication and influencing skills.
  • Solid understanding of security, privacy and compliance (LGPD) applied to data.
  • Ability to make architectural decisions focused on cost/efficiency (FinOps), scalability, and reliability.

Benefits

  • Health and dental insurance;
  • Food and meal vouchers;
  • Childcare assistance;
  • Extended parental leave;
  • Partnerships with gyms and health & wellness professionals via Wellhub (Gympass) and TotalPass;
  • Profit Sharing (PLR);
  • Life insurance;
  • Continuous learning platform (CI&T University);
  • Discount club;
  • Free online platform dedicated to promoting physical and mental health and well-being;
  • Prenatal and responsible parenthood course;
  • Partnerships with online course platforms;
  • Language learning platform;
  • And many more

Applicant Tracking System Keywords

Tip: use these terms in your resume and cover letter to boost ATS matches.

Hard skills
Python, PySpark, SQL, ETL, ELT, Data Governance, Data Quality, Microservices, Data Pipelines, Architectural Decisions
Soft skills
Leadership, Communication, Collaboration, Influencing, Change Management, Problem Solving, Technical Team Management, Stakeholder Engagement, OKR Tracking, Risk Management