Tech Stack
Airflow, Apache, Cloud, Kubernetes, Linux, Python, Scala, Spark, SQL, Tableau, Terraform
About the role
- Lead client projects to ensure high data quality and deliver scalable solutions
- Develop, maintain, and evolve data pipelines and cloud platforms using DataOps and Data Engineering principles
- Participate in architectural decisions, address technical debt, and work within agile methodologies
- Design and model secure, reliable, and scalable data architecture solutions
- Engage in internal training and alignment activities to foster continuous growth
- Manage large volumes of data, creating solutions and generating insights for clients
Requirements
- 3+ years of experience in Data Engineering or a related field
- Bachelor’s degree in a STEM or related field (or equivalent industry experience)
- Proficiency in building and maintaining data pipelines (e.g., in Python or Scala)
- Experience in SQL and Python development
- Experience with Linux environments and cloud services
- Familiarity with CDC ingestion tools (e.g., Debezium, DMS) and data orchestration tools (e.g., Airflow, NiFi)
- Hands-on experience with Kubernetes and Apache Spark
- Experience with infrastructure development tools like Terraform or CloudFormation (preferred)
- Strong organizational and multitasking skills (preferred)
- Experience working in tech startups or with BI tools (e.g., Power BI, Tableau, Metabase) (preferred)
- Ability to process and transfer data via custom scripts (e.g., APIs, files) (preferred)
- Knowledge of relational and non-relational databases (preferred)
- Advanced English (preferred)