Liven

Senior Data Platform Engineer

full-time

Location: 🇮🇩 Indonesia

Job Level

Senior

Tech Stack

Airflow, Amazon Redshift, AWS, Azure, Cloud, Distributed Systems, Docker, Flux, Google Cloud Platform, Grafana, Kafka, Kubernetes, Prometheus, Spark, Terraform

About the role

  • Own and operate the end-to-end data infrastructure, ensuring performance, reliability, and scalability.
  • Design and implement CI/CD pipelines specifically for data workflows and tooling.
  • Deploy and manage tools like Airbyte, Prefect, and Superset using Docker and Kubernetes.
  • Set up and maintain monitoring, secrets management, and alerting systems to ensure platform health and security.
  • Apply GitOps practices with tools like Argo CD for streamlined infrastructure deployments.
  • Manage and scale Kafka, Spark, or DuckDB clusters to support real-time and batch data workloads.
  • Explore and maintain self-hosted tooling such as dbt (dbt Core, since dbt Cloud is the managed offering), ensuring smooth integration and performance.
  • Use Infrastructure-as-Code tools like Terraform or Helm to automate provisioning and configuration.
  • Administer observability stacks such as Grafana and Prometheus for infrastructure visibility.
  • Implement secure access control, role-based permissions, and ensure compliance with GDPR, HIPAA, and internal data governance standards.
  • Collaborate across teams to support data engineers, analysts, and developers with reliable infrastructure and workflow tooling.
  • Steer clear of proprietary infrastructure platforms like AWS Glue or Azure Synapse (we’re staying open-source/cloud-native for now).
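The monitoring and alerting responsibility above amounts to evaluating threshold rules over metric series, in the style of a Prometheus alert with a `for` duration. A minimal pure-Python sketch (the rule shape, field names, and thresholds are hypothetical, for illustration only):

```python
from dataclasses import dataclass

@dataclass
class AlertRule:
    """Prometheus-style threshold rule (hypothetical shape for illustration)."""
    metric: str
    threshold: float
    for_samples: int  # consecutive samples the condition must hold


def firing_alerts(samples: dict[str, list[float]], rules: list[AlertRule]) -> list[str]:
    """Return the metrics whose last `for_samples` observations all
    exceeded the rule's threshold (i.e. the alert would fire)."""
    fired = []
    for rule in rules:
        series = samples.get(rule.metric, [])
        window = series[-rule.for_samples:]
        if len(window) == rule.for_samples and all(v > rule.threshold for v in window):
            fired.append(rule.metric)
    return fired
```

For example, `firing_alerts({"cpu": [0.2, 0.95, 0.97]}, [AlertRule("cpu", 0.9, 2)])` fires `"cpu"` because the last two samples both exceed 0.9; a single spike would not, which is the point of the duration clause.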

Requirements

  • 5–8 years of experience in DataOps, DevOps, or Platform Engineering roles.
  • Proficiency with modern data stack components (e.g., Airflow, dbt, Kafka, Databricks, Redshift).
  • Solid understanding of cloud platforms (AWS or GCP).
  • Strong communication skills to collaborate across product, data science, and engineering teams.
  • Bias for ownership, automation, and proactive resolution.
Good to Have

  • Experience with Infrastructure-as-Code tools like Terraform or Helm for managing Kubernetes and cloud resources.
  • Familiarity with administering Grafana, Prometheus, or similar observability stacks.
  • Exposure to GitOps methodologies and tools like Argo CD or Flux.
  • Hands-on experience running dbt in self-hosted or hybrid setups (dbt Core alongside or instead of the managed dbt Cloud).
  • Understanding of auto-scaling strategies for distributed systems (Kafka, Spark, DuckDB).
  • Experience contributing to platform or DevOps initiatives in a data-heavy environment.
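The auto-scaling point above usually reduces to a sizing decision: grow or shrink a consumer group based on outstanding work. A toy lag-based sketch (defaults and thresholds are hypothetical, not from the posting; production setups would use something like a Kafka scaler with hysteresis):

```python
import math


def desired_replicas(consumer_lag: int,
                     lag_per_replica: int = 10_000,
                     min_replicas: int = 1,
                     max_replicas: int = 20) -> int:
    """Toy scaling rule: size a Kafka consumer group so each replica
    handles at most `lag_per_replica` outstanding messages, clamped
    to the [min_replicas, max_replicas] range."""
    if consumer_lag <= 0:
        return min_replicas
    needed = math.ceil(consumer_lag / lag_per_replica)
    return max(min_replicas, min(max_replicas, needed))
```

With these defaults, a lag of 45,000 messages asks for 5 replicas, zero lag falls back to the minimum, and a huge backlog is capped at the maximum so scaling cannot run away.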