Trustly

DataOps Engineer, Data Tools

full-time

Origin: 🇧🇷 Brazil

Job Level

Mid-Level, Senior

Tech Stack

Airflow, Amazon Redshift, AWS, Docker, EC2, Kafka, Kubernetes, Postgres, Python, Spark, SQL, Terraform

About the role

  • At Trustly, the mission is to deliver a better way to pay and get paid, making Pay by Bank the new standard at checkout and connecting merchants and consumers globally.
  • A work-from-anywhere policy allows employees in Brazil, the U.S., and Canada to work remotely within their country of residence; the culture in Brazil is remote-first.
  • About the team: The DataOps team ensures data from applications, APIs, and tools is delivered in a secure, scalable, and structured way across multiple environments; works with batch (Airflow) and streaming (Kafka) pipelines; maintains platforms like Redshift and QuickSight; enables analytics and supports Data Science.
  • Design, implement, and maintain the scalable data platform infrastructure (AWS, Kubernetes, EMR, Redshift, Glue, etc.).
  • Manage and evolve client-facing tools and frameworks for data processing (e.g., QuickSight, Redshift Editor, Athena IDE, Metabase).
  • Build and maintain secure, automated CI/CD pipelines for data components and infrastructure-as-code.
  • Collaborate with DevOps and Security teams to ensure compliance, reliability, and scalability of the data platform.
  • Provide development environments, standardized workflows, and tooling (e.g., SageMaker Studio, Athena IDE, Redshift Editor) to improve developer experience.
  • Support version control, release workflows, and automation for data transformations, jobs, workloads, and integrations used by data producers and consumers.
  • Ensure high availability and performance of data tools through observability and alerting integrations.
  • Implement data quality and validation checks (e.g., dbt tests, unit tests); a minimal illustrative sketch follows this list.
  • Contribute to maintaining data catalogs and documentation for lineage and governance.
  • Investigate and resolve issues in data pipelines, workloads, and integrations to ensure SLAs are met.
  • Document data flows, architectural setup, and data models.
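The posting itself contains no code; purely as an illustration of the batch-orchestration and data-quality responsibilities listed above, here is a minimal sketch assuming Airflow 2.x, with a hypothetical DAG id, task names, and a placeholder row count standing in for a real query.

```python
# Illustrative sketch only (not part of the posting). A minimal Airflow DAG of the
# kind described above: one batch extract task followed by a simple data-quality
# check. The DAG id, task names, and row count below are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_payments(**_):
    # Placeholder for a real extract step (e.g., landing an API batch in S3).
    print("extracting daily payments batch...")


def check_row_count(**_):
    # Placeholder validation; in practice this could be a dbt test or a
    # COUNT(*) query against Redshift.
    row_count = 1  # stands in for a real query result
    if row_count == 0:
        raise ValueError("target table is empty; failing the run")


with DAG(
    dag_id="payments_daily",        # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_payments", python_callable=extract_payments)
    validate = PythonOperator(task_id="check_row_count", python_callable=check_row_count)
    extract >> validate             # run the quality check only after the extract succeeds
```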

Requirements

  • Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, IT, or related technical field.
  • A successful track record of building big data pipelines and orchestrating data workloads.
  • Experience with AWS services (EKS, EC2, EMR, RDS) and big data tools (Spark, Redshift).
  • Experience with relational databases (preferably Postgres), strong SQL skills, data modeling, and data warehouses.
  • Hands-on experience with Infrastructure as Code (Terraform), Kubernetes, Docker, and CI/CD tools.
  • Solid background in automation and workflow orchestration (e.g., Airflow, Sagemaker).
  • Python programming skills.
  • Proactivity and autonomy to solve everyday problems.
  • Good communication skills to interact with different departments.
  • Organization and attention to detail.
  • Technical curiosity and a desire for continuous learning.
  • Ability to work collaboratively and as part of a team.
  • Advanced English skills.