Tech Stack
Angular, Ansible, AWS, Cloud, Cyber Security, Hadoop, Java, JavaScript, Jenkins, Kafka, Kubernetes, Microservices, .NET, Node.js, OpenShift, Python, React, Spark, Terraform
About the role
- Design, implement, manage, and monitor end-to-end IT solutions for Advanced Analytics and Data Replication (CDC) platforms
- Ensure the seamless operation of services within SLA performance parameters, with a focus on Databricks and cloud-based solutions
- Innovate and optimize IT services and systems, emphasizing cloud solutions such as Databricks and AWS
- Implement Data Products in a Data Mesh architecture
- Support and maintain existing applications, ensuring they are running efficiently and effectively
- Work closely with the Data team to transition data pipelines into optimized production-ready states for various workflows
- Administer and manage Data environments, tech stack, and traditional databases
- Implement automation and CI/CD practices using tools such as Git, Jenkins/GitHub Actions, and Ansible/Terraform
- Participate in diagnosing and solving complex problems, and document configurations to maintain best practices across the IT infrastructure
Requirements
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field
- Minimum 3 years of experience in Data Engineering, Data DevOps, or a relevant field, preferably in the banking or telecommunications industry
- Advanced knowledge of specific Data Engineering technologies: Spark, Hadoop, Hive, Python, R, Databricks, Kafka, Debezium, GoldenGate (2 or more preferred)
- Strong understanding of networking and of microservices and container-orchestration platforms such as Kubernetes, EKS, and OpenShift
- Experience with Cloud architecture and Big Data solutions, particularly with Databricks and AWS
- Proficiency with automation and CI/CD tools (Git, Jenkins/GitHub Actions, Ansible/Terraform, Shell/Python)
- Excellent problem-solving skills with the ability to collaborate effectively with various teams
- Professional certifications are a plus, especially when backed by hands-on experience with the relevant technologies