Tech Stack
AWS, Azure, Cloud, Docker, ETL, Google Cloud Platform, Hadoop, Kubernetes, Python, SQL
About the role
- Design and implement data pipelines using SAP Data Intelligence (DI) to orchestrate data flows across heterogeneous systems (SAP and non-SAP).
- Integrate structured and unstructured data from sources like SAP S/4HANA, SAP BW/4HANA, SAP HANA, Azure, AWS, Hadoop, REST APIs, etc.
- Leverage SAP DI operators, pipelines, and the Metadata Explorer to design modular and reusable data processes.
- Enable and support data preparation, data profiling, and data lineage for analytical and AI/ML initiatives.
- Collaborate with data scientists, analysts, and business users to understand data requirements and ensure effective data delivery.
- Monitor and optimize performance of data pipelines and troubleshoot issues related to connectivity, transformation, and orchestration.
- Ensure data governance, quality, and compliance using SAP DI’s data management capabilities.
- Maintain documentation for data flows, transformations, and architecture.
Requirements
- Bachelor’s/Master’s degree in Computer Science, Information Systems, Engineering, or related field.
- 3–8 years of experience in data engineering or data integration, with at least 1–2 years hands-on experience in SAP Data Intelligence (version 3.x or later).
- Proficient in data integration concepts, ETL/ELT pipelines, and metadata management.
- Knowledge of SAP systems such as S/4HANA, BW/4HANA, SAP HANA.
- Experience working with cloud platforms (Azure, AWS, GCP) and Big Data technologies is a plus.
- Strong understanding of data governance, security, and compliance frameworks.
- Hands-on experience with Python, SQL, and Docker containers.
- Experience working with SAP DI's Metadata Explorer.
- Familiarity with Kubernetes, Git, and CI/CD pipelines is desirable.
- Excellent communication and stakeholder management skills.