Runtalent

Data Architect – Specialist

Full-time

Location Type: Remote

Location: Remote • 🇧🇷 Brazil

Job Level

Mid-Level, Senior

Tech Stack

Airflow, Amazon Redshift, AWS, Azure, BigQuery, Cloud, Google Cloud Platform, Kafka, Spark, Tableau

About the role

  • Design and evolve C&A’s enterprise data architecture in a hybrid, multi-cloud environment (on-premises ↔ AWS ↔ Azure ↔ GCP).
  • Design integrations between legacy systems and cloud ingestion and processing layers.
  • Define integration standards between local data centers, on-premises systems and cloud solutions.
  • Ensure the architecture is resilient, secure, high-performing and cost-optimized.
  • Architect large-scale pipelines distributed across AWS, Azure and GCP.
  • Define and maintain the data lifecycle across Bronze, Silver and Gold layers (see the sketch after this list).
  • Implement Data Governance, Data Quality and a unified catalog (Lake Formation, Glue Catalog, DataHub, Collibra or equivalent).
  • Collaborate with Product, Customer and Supplier areas to define domains and enterprise models.
  • Integrate analytical consumption and data science environments across clouds.
  • Build architectures for real-time and batch consumption.
  • Guide engineering, analytics and systems squads in adopting unified architecture patterns for the multi-cloud ecosystem.
  • Conduct POCs, including comparative evaluations between AWS, GCP and Azure when needed.
  • Support development of data extraction and ingestion processes.
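
As an illustration of the Bronze/Silver/Gold lifecycle referenced above, here is a minimal PySpark sketch; the storage paths, dataset and column names are hypothetical, and the same pattern applies on S3, ADLS or GCS regardless of which cloud runs the job.

```python
# Minimal medallion-architecture sketch (hypothetical paths and columns).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: land the raw feed unchanged, tagged with ingestion metadata.
raw = spark.read.json("s3://example-landing/orders/")
bronze = raw.withColumn("_ingested_at", F.current_timestamp())
bronze.write.mode("append").parquet("s3://example-lake/bronze/orders/")

# Silver: typed, deduplicated, validated records.
silver = (
    bronze.dropDuplicates(["order_id"])
    .withColumn("order_total", F.col("order_total").cast("decimal(18,2)"))
    .filter(F.col("order_id").isNotNull())
)
silver.write.mode("overwrite").parquet("s3://example-lake/silver/orders/")

# Gold: business-level aggregates ready for BI and data science consumption.
gold = silver.groupBy("customer_id").agg(F.sum("order_total").alias("lifetime_value"))
gold.write.mode("overwrite").parquet("s3://example-lake/gold/customer_ltv/")
```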

Requirements

  • Strong experience with hybrid integration: on-premises, AWS, GCP and Azure.
  • Practical experience with corporate connectivity.
  • Experience designing standardized, interoperable multi-cloud architectures.
  • Deep knowledge of multi-cloud data pipelines: AWS (Kinesis, Glue, S3, Lake Formation, Redshift, EMR), GCP (BigQuery, Dataflow/Beam, Pub/Sub, Composer), Azure (Data Factory, Synapse, ADLS, Databricks, Event Hub).
  • Experience with ingestion patterns: CDC (Change Data Capture), streaming, batch, API ingestion, file ingestion.
  • Data modeling: conceptual, logical, physical, canonical models and corporate modeling standards.
  • Big Data and distributed processing (Spark, EMR, Databricks desirable).
  • Governance, metadata, data quality, catalog and lineage.
  • Security and IAM.
  • Orchestration tools: Step Functions, Airflow, MuleSoft (an illustrative Airflow DAG appears after this list).
  • Experience with DW/BI: Power BI (Required), Looker (Desired), Tableau (Desired), QuickSight (Desired).
  • Solid knowledge of messaging systems: Kafka, Pub/Sub, Event Hub, Kinesis.
  • Experience with BI and self-service analytics (Power BI, Tableau, Looker).
  • Experience with CI/CD and Infrastructure as Code (Terraform preferred), GitOps.
  • Cloud security experience.
  • Hands-on profile to support development of data extraction and ingestion.
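
To illustrate the orchestration requirement, here is a minimal sketch of a daily batch-ingestion DAG using the Airflow 2 TaskFlow API; the DAG name, schedule and task bodies are placeholders rather than actual pipelines.

```python
# Minimal Airflow 2 TaskFlow sketch of a daily batch-ingestion DAG
# (placeholder tasks; requires Airflow 2.4+ for the `schedule` argument).
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False, tags=["sketch"])
def batch_ingestion_sketch():
    @task
    def extract() -> str:
        # Pull the daily extract from a source system (placeholder path).
        return "s3://example-landing/orders/latest/"

    @task
    def load_bronze(path: str) -> None:
        # Copy the extract into the Bronze layer (placeholder action).
        print(f"loading {path} into bronze")

    load_bronze(extract())


batch_ingestion_sketch()
```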

Benefits

  • Laptop in excellent condition, configured to meet development needs:
  • Licensed Windows 10 or later operating system;
  • Licensed Microsoft Office suite installed locally (web-based Office alone is not sufficient);
  • Microsoft Project;
  • Antivirus with regularly updated signatures;
  • Processor: minimum i7;
  • RAM: minimum 16 GB;
  • Disk space: 500 GB;
  • Peripherals/accessories: mouse, keyboard, webcam, headset;

Applicant Tracking System Keywords

Tip: use these terms in your resume and cover letter to boost ATS matches.

Hard skills
data architecture, multi-cloud environments, data pipelines, data modeling, big data processing, data governance, data quality, CI/CD, Infrastructure as Code, streaming ingestion
Soft skills
collaboration, guidance, communication, problem-solving, analytical thinking