Keyrus

Data Engineer

Full-time

Location Type: Hybrid

Location: Bordeaux, France

About the role

  • Take part in large-scale projects by building and optimizing modern data platforms and pipelines.
  • Design, develop, and productionize reliable, high-performance data pipelines.
  • Integrate structured and unstructured data, both batch and real-time.
  • Deploy and optimize data environments on the Cloud (AWS, Azure, GCP).
  • Implement and orchestrate ETL/ELT processes (Talend, Airflow, dbt, Fivetran, etc.).
  • Work with modern platforms such as Databricks, Snowflake, BigQuery, and Redshift.
  • Ensure data quality, security, and governance.

Requirements

  • Strong technical expertise in data ingestion, processing, and optimization.
  • Proficiency in Python and SQL (Scala or Java is a plus).
  • Experience with frameworks and tools: Spark, Kafka, Databricks, dbt, Airflow.
  • Familiarity with Cloud environments: AWS, Azure, or GCP.
  • Databases: relational (PostgreSQL, MySQL) and cloud data warehouses (Snowflake, BigQuery, Redshift).
  • ETL/ELT experience: Talend, Fivetran, or equivalents.
  • Bonus: experience in Machine Learning, real-time processing, or data mesh.

Benefits

  • A team of passionate experts: Join an environment where innovation and excellence are at the heart of every project.
  • Large-scale projects: At Keyrus, you will work on digital transformation and data management initiatives for major companies—high-impact projects!
  • A unique working environment: We value innovation, diversity, and international mobility.
  • An inclusive corporate culture: Diversity is our strength, and we firmly believe everyone has a role to play in innovation and transformation.
  • Our commitment to inclusion: All our positions are open to people with disabilities.

Applicant Tracking System Keywords

Tip: use these terms in your resume and cover letter to boost ATS matches.

Hard Skills & Tools
data ingestion, data processing, data optimization, Python, SQL, Spark, Kafka, ETL, ELT, Machine Learning