Tech Stack
AWS, Cloud, EC2, Java, PySpark, Python, SQL
About the role
- Design, build, and maintain scalable data pipelines using PySpark and AWS services.
- Develop and deploy cloud infrastructure using AWS Glue, EMR, Lambda, Athena, S3, VPC, and EC2.
- Collaborate with cross-functional teams to support data-driven decision-making.
- Use GitHub for version control and project collaboration.
- Contribute to the continuous improvement of processes and infrastructure automation.
- Participate in Agile development practices and daily team collaboration sessions.
- Work on innovative data solutions supporting key business functions, with a focus on cloud architecture, data processing, and infrastructure deployment.
- Relocate to Málaga, Spain, to join an international team.
Requirements
- Minimum 2–3 years of experience in a similar Data Engineering or Cloud Development role.
- Proficiency in Python and PySpark.
- Experience with AWS services: Glue, EMR, Lambda, Athena, S3, VPC, EC2.
- Familiarity with GitHub for source control.
- Strong English communication skills – minimum B2 level, with strong oral comprehension.
- A collaborative, proactive, and adaptable mindset.
- Commitment to high-quality work and continuous improvement.
- Advantageous: Experience with QlikSense or other data visualisation tools.
- Advantageous: Understanding of CI/CD pipelines for AWS infrastructure deployment.
- Advantageous: Knowledge of Java and SQL.
- Willingness and ability to relocate to Málaga, Spain.
- Access to a reliable laptop and internet connection during the transition phase.