Tech Stack
Amazon Redshift, AWS, Cloud, ETL, PySpark, Python, Scala, Spark, SQL, Terraform
About the role
- Design and build next-generation data platforms and pipelines using Python, PySpark, or Scala (a representative sketch follows this list)
- Integrate data from diverse sources into centralised lakes and cloud platforms
- Create technology blueprints and engineering roadmaps for long-term transformation
- Ensure data security, governance, and compliance
- Deliver end-to-end solutions that meet business needs and minimise risk
- Build high-quality, scalable data products from scratch
- Communicate effectively with stakeholders and contribute to team-wide engineering practices
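For context, here is a minimal PySpark sketch of the kind of batch pipeline this role involves: read from a raw zone, apply light cleansing, and land partitioned Parquet in a curated lake zone. The bucket paths, column names, and schema are hypothetical placeholders, not details taken from this posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-ingest").getOrCreate()

# Ingest raw CSV exports from a hypothetical source system.
raw = (
    spark.read
    .option("header", "true")
    .csv("s3://example-raw-zone/orders/")
)

# Light cleansing: normalise types and drop obvious duplicates.
clean = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("decimal(12,2)"))
       .dropDuplicates(["order_id"])
)

# Land the result in the curated zone of the lake, partitioned by date.
(
    clean.withColumn("order_date", F.to_date("order_ts"))
         .write.mode("overwrite")
         .partitionBy("order_date")
         .parquet("s3://example-curated-zone/orders/")
)
```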
Requirements
- 5+ years of experience in Data Engineering
- AWS Certified Data Engineer - Associate certification
- Strong expertise in Python, PySpark, SQL, Spark, Scala
- Experience in AWS data platforms (Glue, Lambda, Redshift, Athena, Kinesis, EMR)
- Data Governance & Security experience with AWS DataZone, Lake Formation, IAM
- Experience using AI-powered coding assistants
- Exposure to AWS Marketplace, API Gateway, and data monetisation strategies
- Experience with Terraform, CloudFormation, GitOps, and serverless frameworks
- Ability to tune ETL jobs, query performance, and cost efficiency (see the sketch after this list)
- Knowledge of data product thinking and data mesh principles
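As an illustration of the tuning expectations above, here is a hedged sketch of three routine Spark optimisations: right-sizing shuffle partitions, pruning partitions by filtering on the partition column, and broadcasting a small dimension table to avoid a shuffle join. All table paths and column names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-tuning").getOrCreate()

# Fewer shuffle partitions for a modest job keeps task overhead (and cost) down.
spark.conf.set("spark.sql.shuffle.partitions", "64")

orders = spark.read.parquet("s3://example-curated-zone/orders/")
dim_customer = spark.read.parquet("s3://example-curated-zone/dim_customer/")

# Filtering on the partition column lets Spark prune partitions
# instead of scanning the whole table.
recent = orders.where(F.col("order_date") >= "2024-01-01")

# Broadcasting a small dimension table avoids a full shuffle join.
enriched = recent.join(F.broadcast(dim_customer), "customer_id")

enriched.write.mode("overwrite").parquet("s3://example-curated-zone/orders_enriched/")
```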
Benefits
- Flexible working hours
- Professional development opportunities
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard skills
Python, PySpark, SQL, Spark, Scala, ETL, data product thinking, data mesh principles, data governance, data security
Soft skills
communication, stakeholder engagement, team collaboration
Certifications
AWS Certified Data Engineer - Associate