Data Engineering Specialist

Experian

Full-time

Location: Remote • 🇧🇷 Brazil

Job Level

Mid-Level • Senior

Tech Stack

Airflow • Apache • AWS • Azure • Cloud • ETL • Google Cloud Platform • Python • Scala • Spark

About the role

  • Develop and maintain scalable data pipelines using Scala and Apache Spark, with a focus on performance and reusability.
  • Design and evolve the medallion architecture (bronze, silver, gold), ensuring data governance, traceability, and quality.
  • Implement data ingestion, transformation, and delivery solutions in cloud and lakehouse environments.
  • Define and enforce data engineering best practices, including versioning, automated testing, and CI/CD.
  • Support technical and architectural decisions with engineering and product teams.
  • Work in multidisciplinary squads, collaborating with engineers, analysts, and data scientists.
  • Participate in agile ceremonies and contribute to the continuous improvement of processes and tools.
  • Monitor and optimize production pipelines, ensuring reliability and efficiency.
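The medallion flow mentioned above (bronze, silver, gold) can be sketched, purely for illustration, in plain Python. Field names such as `user_id` and `amount` are hypothetical, and a real pipeline of this kind would use Scala with Spark DataFrames rather than dicts; this only shows the layering idea of raw → cleaned → aggregated data:

```python
# Illustrative medallion-style flow: bronze (raw) -> silver (cleaned) -> gold (aggregated).
# Plain Python stand-in for what would be Spark transformations in practice.

from collections import defaultdict

def to_silver(bronze_rows):
    """Clean raw bronze records: drop rows missing a user_id, normalize amount to float."""
    silver = []
    for row in bronze_rows:
        if not row.get("user_id"):
            continue  # discard malformed raw records at the silver layer
        silver.append({"user_id": row["user_id"], "amount": float(row["amount"])})
    return silver

def to_gold(silver_rows):
    """Aggregate cleaned records into a per-user total (gold layer)."""
    totals = defaultdict(float)
    for row in silver_rows:
        totals[row["user_id"]] += row["amount"]
    return dict(totals)

bronze = [
    {"user_id": "u1", "amount": "10.5"},
    {"user_id": "",   "amount": "3.0"},   # malformed: dropped at silver
    {"user_id": "u1", "amount": "4.5"},
    {"user_id": "u2", "amount": "7.0"},
]
gold = to_gold(to_silver(bronze))
print(gold)  # {'u1': 15.0, 'u2': 7.0}
```

Each layer consumes only the layer below it, which is what gives the architecture its traceability: every gold-level figure can be traced back through silver records to the original bronze rows.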

Requirements

  • Strong experience with Python, Scala, and Apache Spark in distributed environments.
  • Proven knowledge of medallion architecture and data lake / lakehouse concepts.
  • Advanced knowledge of data modeling, ETL/ELT, and data pipelines.
  • Experience with orchestration tools such as Airflow or similar.
  • Experience with cloud environments (Azure, AWS, or GCP).
  • Solid command of engineering practices such as versioning, testing, and automation.
  • Language: English (technical)

Benefits

  • Health/medical plan
  • Retirement plan
  • Paid time off
  • Flexible work
  • Professional development

Applicant Tracking System Keywords

Tip: use these terms in your resume and cover letter to boost ATS matches.

Hard skills
Scala • Apache Spark • Python • data modeling • ETL • ELT • data pipelines • orchestration tools • versioning • automated testing
Soft skills
collaboration • communication • agile methodologies • continuous improvement • problem-solving