Senior Data Engineer – Integration Hub, Data Pipelines

Cuculus GmbH

Full-time

Location Type: Remote

Location: India

About the role

  • Design, build, and maintain robust ETL/ELT data pipelines for batch and streaming workloads.
  • Implement data ingestion and transformation workflows using Apache Airflow, Apache NiFi, Apache Spark, and Kafka (a minimal Airflow sketch follows this list).
  • Integrate data from multiple sources including REST APIs, files, relational databases, message queues, and external SaaS platforms.
  • Optimize pipelines for performance, scalability, reliability, and cost efficiency.
  • Develop and operate a centralized data integration hub that supports multiple upstream and downstream systems.
  • Build reusable, modular integration components and frameworks.
  • Ensure high availability, fault tolerance, and observability of data workflows.
  • Design and manage data warehouses, data lakes, and operational data stores using PostgreSQL and related technologies.
  • Implement appropriate data modeling strategies for analytical and operational use cases.
  • Manage schema evolution, metadata, and versioning.
  • Implement data validation, monitoring, and reconciliation mechanisms to ensure data accuracy and completeness.
  • Enforce data security best practices, access controls, and compliance with internal governance policies.
  • Establish logging, alerting, and auditability across pipelines.
  • Automate data workflows, deployments, and operational processes to support scale and reliability.
  • Monitor pipelines proactively and troubleshoot production issues.
  • Improve CI/CD practices for data engineering workflows.
  • Work closely with data scientists, analysts, backend engineers, and business stakeholders to understand data requirements.
  • Translate business needs into technical data solutions.
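
To make the responsibilities concrete, below is a minimal sketch of the kind of batch pipeline described above, written with Apache Airflow's TaskFlow API (Airflow 2.x). The source URL, target table, and connection ID (`dwh_postgres`) are hypothetical placeholders, not details from this posting:

```python
# A minimal batch ETL sketch: extract from a (hypothetical) REST API,
# validate, and load into PostgreSQL via an Airflow-managed connection.
from datetime import datetime

import requests
from airflow.decorators import dag, task
from airflow.providers.postgres.hooks.postgres import PostgresHook


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def orders_ingestion():
    @task
    def extract() -> list[dict]:
        # Pull a day's worth of records from the (hypothetical) source API.
        resp = requests.get("https://api.example.com/v1/orders", timeout=30)
        resp.raise_for_status()
        return resp.json()

    @task
    def validate(rows: list[dict]) -> list[dict]:
        # Basic completeness check before loading; reject empty batches.
        if not rows:
            raise ValueError("Empty batch from source API")
        return [r for r in rows if r.get("order_id") is not None]

    @task
    def load(rows: list[dict]) -> None:
        # Load into PostgreSQL using the Airflow connection "dwh_postgres".
        hook = PostgresHook(postgres_conn_id="dwh_postgres")
        hook.insert_rows(
            table="staging.orders",
            rows=[(r["order_id"], r["amount"]) for r in rows],
            target_fields=["order_id", "amount"],
        )

    load(validate(extract()))


orders_ingestion()
```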

Requirements

  • 5+ years of hands-on experience as a Data Engineer or in a similar role.
  • Proven experience as an individual contributor on at least three end-to-end data engineering projects, from design to production.
  • Strong hands-on experience with: Apache Airflow / Dagster, Apache NiFi, Apache Spark, Apache Kafka, PostgreSQL (see the streaming sketch after this list).
  • Extensive experience integrating data from APIs, files, databases, and third-party systems.
  • Strong SQL skills and experience with data modeling.
  • Solid programming experience in Python and/or Java/Scala.
  • Experience with Linux environments and version control systems (Git).
  • Strong problem-solving, debugging, and performance-tuning skills.
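
As an illustration of the Spark and Kafka experience called for above, here is a minimal Spark Structured Streaming sketch that consumes a Kafka topic and lands it in a data lake as Parquet. The broker address, topic name, schema, and paths are hypothetical, and running it requires the spark-sql-kafka connector package on the classpath:

```python
# A minimal streaming sketch: read JSON events from Kafka, parse them,
# and write the result to Parquet with checkpointing.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType

spark = SparkSession.builder.appName("events-stream").getOrCreate()

# Hypothetical event schema; real pipelines would manage this centrally.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("payload", StringType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
    # Kafka delivers bytes; decode the value and parse the JSON payload.
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream.format("parquet")
    .option("path", "/data/lake/events")
    .option("checkpointLocation", "/data/checkpoints/events")
    .start()
)
query.awaitTermination()
```
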
Applicant Tracking System Keywords

Tip: use these terms in your resume and cover letter to boost ATS matches.

Hard Skills & Tools
ETL, ELT, data pipelines, data ingestion, data transformation, data modeling, SQL, Python, Java, Scala
Soft Skills
problem-solving, debugging, performance-tuning, collaboration, communication