Tech Stack
Apache, AWS, Azure, ETL, Google Cloud Platform, Hadoop, Python, Scala, Spark, SQL
About the role
- Participate in challenging projects with major clients
- Develop and maintain ETL/ELT pipelines (see the sketch after this list)
- Work with ETL tools and Big Data processing frameworks
- Contribute to projects involving data integration, transformation, and loading
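As a rough illustration of the day-to-day work, here is a minimal extract-transform-load sketch in plain Python; the CSV source, the column names (order_id, amount), and the SQLite target are hypothetical placeholders, not the tools used on actual projects.

```python
# A minimal ETL sketch: extract rows from a CSV source, clean them,
# and load them into a relational target. All names are illustrative.
import csv
import sqlite3

def extract(path):
    # Extract: read raw rows from a (hypothetical) CSV source.
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows):
    # Transform: normalize types and drop incomplete records.
    cleaned = []
    for row in rows:
        if row.get("amount"):
            cleaned.append((row["order_id"], float(row["amount"])))
    return cleaned

def load(records, db_path="warehouse.db"):
    # Load: upsert the cleaned records into a relational table.
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS orders (order_id TEXT PRIMARY KEY, amount REAL)"
    )
    con.executemany("INSERT OR REPLACE INTO orders VALUES (?, ?)", records)
    con.commit()
    con.close()

if __name__ == "__main__":
    load(transform(extract("orders.csv")))
```

A production pipeline would swap the CSV and SQLite endpoints for the ETL tools, warehouses, and orchestration listed under Requirements.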
Requirements
- Experience developing and maintaining ETL/ELT pipelines (data extraction, transformation, and loading)
- Knowledge of relational databases and advanced SQL
- Experience with ETL tools such as Informatica PowerCenter / IICS, SAP Data Services (BODS), Talend, Pentaho, DataStage, Azure Data Factory, AWS Glue, dbt
- Experience with distributed processing and Big Data using Apache Spark, Databricks, Hadoop, or Hive (see the PySpark sketch after this list)
- Familiarity with programming languages for data pipelines, such as Python and/or Scala
- Experience with cloud environments (Azure, AWS, GCP) and source control (Git)
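To illustrate the Spark, SQL, and Python skills listed above, here is a minimal PySpark sketch combining distributed processing with a window-function query; the input path and the column names (user_id, amount, event_date) are invented for illustration, and real pipelines would run against the cloud platforms named above.

```python
# A minimal PySpark sketch: read a distributed dataset, rank rows per
# user with a SQL window function, and write the result back out.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: read a (hypothetical) Parquet dataset into a distributed DataFrame.
events = spark.read.parquet("s3://example-bucket/events.parquet")
events.createOrReplaceTempView("events")

# Transform: keep each user's highest-amount event using ROW_NUMBER().
top_per_user = spark.sql("""
    SELECT user_id, amount, event_date
    FROM (
        SELECT *,
               ROW_NUMBER() OVER (PARTITION BY user_id ORDER BY amount DESC) AS rn
        FROM events
    ) ranked
    WHERE rn = 1
""")

# Load: write the result for downstream consumers.
top_per_user.write.mode("overwrite").parquet("s3://example-bucket/top_events/")

spark.stop()
```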
Benefits
- Learning incentive programs (Udemy)
- Corporate English classes at affordable rates
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard skills
ETL, ELT, SQL, Apache Spark, Databricks, Hadoop, Hive, Python, Scala, Data integration