Tech Stack
Airflow, Amazon Redshift, Apache, AWS, Azure, BigQuery, Cloud, Docker, ETL, Jenkins, MySQL, NumPy, Pandas, Postgres, Python, Spark, SQL
About the role
- Assist in the design and implementation of scalable data pipelines (ETL/ELT)
- Collaborate with senior engineers to maintain high-performance data architectures
- Develop applications, APIs, and pipelines for data ingestion, processing, and consumption
- Work with AWS and Azure data services (data warehouses, data lakes, streaming)
- Support automation and optimization of data workflows with monitoring and observability
- Collaborate with Data Scientists, Analysts, and Product teams to integrate analytics and ML models into production
- Participate in code reviews and learn best practices in software and data engineering
- Stay up to date with new technologies (Data Mesh, RAG, MLOps, etc.) and propose innovative solutions
Requirements
- 2+ years of experience in Data Engineering or Software Development with a data focus
- Proficiency in Python (Pandas, NumPy, SQLAlchemy) and SQL
- Familiarity with Apache Spark, Airflow, or other data orchestration tools
- Experience with relational databases (PostgreSQL, MySQL, SQL Server) and/or data warehouses (Snowflake, BigQuery, Redshift)
- Exposure to cloud data ecosystems (AWS or Azure)
- Basic knowledge of DevOps tools: Docker, CI/CD pipelines (Jenkins, GitLab CI, GitHub Actions)
- Bachelor’s degree in Engineering, Computer Science, or a related field
- Fluent in technical English (reading/writing)
Benefits
- Flexible hybrid work: two days in the office, three from home
- Free lunch on office days
- Fresh fruit, coffee, tea, and snacks
- Office activities
- 30 days working from abroad
- Birthday day off
- 23 vacation days
- 2 personal days
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard skills
data engineering, software development, Python, SQL, Apache Spark, Airflow, relational databases, data warehouses, DevOps, data pipelines
Soft skills
collaboration, communication, problem-solving, adaptability, innovation