SS&C Technologies

Principal Data Pipeline Lead

Full-time

Location Type: Office

Location: Hyderabad, India

About the role

  • Lead the development of batch and real-time data pipelines on top of a modern data platform
  • Design and build scalable ingestion and transformation pipelines
  • Mentor a small team of engineers
  • Collaborate with the platform engineering team to build pipelines
  • Implement CDC pipelines using Debezium and Kafka
  • Build streaming pipelines using Kafka and Apache Flink
  • Develop transformation workflows using Python, Spark / PySpark, and Airflow
  • Ingest data from DB2 replication streams
  • Process legacy fixed-width and CSV data feeds
  • Integrate API-based data sources
  • Store and manage data using Apache Iceberg and Parquet
  • Enable analytics through Trino and StarRocks
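The legacy fixed-width feeds mentioned above typically need positional parsing before the data can be transformed or written to Parquet. A minimal Python sketch of that step (the field names and column widths here are hypothetical, not taken from this posting):

```python
# Hypothetical field layout for a legacy fixed-width feed:
# (field name, start offset, end offset) -- illustrative only.
FIELD_SPECS = [
    ("account_id", 0, 10),
    ("trade_date", 10, 18),
    ("amount", 18, 30),
]

def parse_fixed_width(line: str) -> dict:
    """Slice one fixed-width record into named, whitespace-stripped fields."""
    record = {name: line[start:end].strip() for name, start, end in FIELD_SPECS}
    # Convert the numeric field; real feeds would need per-field type rules.
    record["amount"] = float(record["amount"])
    return record

# Example record: 10-char account id, 8-char date, 12-char right-padded amount.
sample = "ACC0000001" + "20240115" + "     1234.56"
parsed = parse_fixed_width(sample)
```

In a Spark pipeline the same slicing logic would usually be expressed with `substring` column expressions over a single-column text read, so it runs distributed rather than row by row in Python.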

Requirements

  • 8+ years building data platforms or large-scale data pipelines
  • Strong programming experience in Python
  • Experience with Spark / PySpark
  • Experience building pipelines with Apache Airflow
  • Experience with Kafka-based streaming architectures
  • Experience implementing CDC pipelines (Debezium or similar)
  • Experience with Apache Flink or other streaming frameworks
  • Experience with Parquet and modern table formats such as Apache Iceberg
  • Experience with distributed query engines such as Trino, Presto, or StarRocks
  • Experience integrating data from heterogeneous or legacy systems
  • Experience leading or mentoring engineers

Benefits

  • Competitive salary
  • Opportunities for increased leadership scope as the team expands

Applicant Tracking System Keywords

Tip: use these terms in your resume and cover letter to boost ATS matches.

Hard Skills & Tools
Python, Spark, PySpark, Apache Airflow, Kafka, Debezium, Apache Flink, Apache Iceberg, Parquet, Trino
Soft Skills
mentoring, collaboration, leadership