Zendesk

Data Engineer II, AI Agents

full-time · 🇩🇪 Germany


Job Level

Mid-Level · Senior

Tech Stack

Airflow, Apache, AWS, BigQuery, Cloud, Distributed Systems, ETL, Google Cloud Platform, Kafka, Kubernetes, MongoDB, NoSQL, Python, Spark, SQL, Terraform, TypeScript

About the role

  • Design and build data pipelines, services, and infrastructure powering AI-driven insights.
  • Work at the intersection of product engineering, analytics, and AI to create robust, reliable, and scalable data systems.
  • Collaborate closely with data scientists, analysts, backend and frontend engineers, and product managers to design data models, define integration patterns, and optimize data workflows.
  • Support real-time and historical insights for users and enable intelligent, customer-facing features.
  • Deliver clean, scalable, and reliable data solutions; write well-tested, well-documented code; improve performance and reliability of data systems.
  • Participate in architecture discussions and help define data standards, schemas, and contracts.
  • Contribute to planning, reviews, team goals, and knowledge sharing.

Requirements

  • 3+ years of experience designing and implementing data pipelines and systems in a production environment.
  • Proficiency with SQL, dbt, and at least one general-purpose programming language such as Python.
  • Experience with batch and stream processing frameworks (e.g., Apache Flink, Apache Spark, Apache Beam, or equivalent).
  • Experience with orchestration tools (e.g., Apache Airflow).
  • Familiarity with event-driven data architectures and messaging systems like Pub/Sub, Kafka, or similar.
  • Strong understanding of data modeling and database design, both relational and NoSQL.
  • Experience building and maintaining ETL/ELT workflows that are scalable, testable, and observable.
  • Product mindset — care about the quality, usability, and impact of the data.
  • Strong communication and collaboration skills.
  • Curiosity, humility, and a drive for continuous learning.
  • A big plus: experience with cloud-based data platforms (GCP or AWS preferred); familiarity with Looker or other analytics/BI tools; experience with feature stores or ML workflows; understanding of CI/CD and infrastructure-as-code tools such as Terraform; comfort with large-scale distributed systems and production debugging.
Explorium

Data Engineer

Mid · Senior · full-time · 🇮🇱 Israel
Posted: 6 days ago · Source: www.comeet.com
Airflow, AWS, Azure, Cloud, ETL, Google Cloud Platform, NoSQL, Python, Spark, SQL
GeneDx

Senior Data Warehouse Engineer

Senior · full-time · $153k–$191k / year · 🇺🇸 United States
Posted: 7 days ago · Source: boards.greenhouse.io
Airflow, AWS, Cloud, ETL, Google Cloud Platform, Java, Kubernetes, Python, Scala, Spark, SQL, Terraform
Wand AI

Data Experience Software Engineer @ Wand Synthesis AI Inc.

Mid · Senior · full-time · 🇺🇸 United States
Posted: 26 days ago · Source: www.comeet.com
Airflow, AWS, Azure, Cloud, Docker, ETL, Flask, Google Cloud Platform, Kafka, Kubernetes, Microservices, Python, +1 more
Northbeam

Senior Software Engineer, Python, Data & Infrastructure

Senior · full-time · $170k–$200k / year · 🇺🇸 United States
Posted: 27 days ago · Source: boards.greenhouse.io
Airflow, AWS, BigQuery, Cloud, Distributed Systems, Docker, ERP, ETL, Google Cloud Platform, GraphQL, Kubernetes, Python, +2 more
toogeza

Data Engineer/Developer

Mid · Senior · full-time · 🇺🇦 Ukraine
Posted: 5 days ago · Source: jobs.ashbyhq.com
Airflow, AWS, BigQuery, Cloud, Docker, ETL, Google Cloud Platform, Kubernetes, Linux, Python, SQL, Terraform