Fountain

Senior Data Engineer

full-time

Location: Remote • 🇪🇸 Spain

Job Level

Senior

Tech Stack

Airflow, Amazon Redshift, AWS, Azure, BigQuery, Cloud, ETL, Google Cloud Platform, Kafka, MongoDB, NoSQL, Postgres, Python, SQL, Terraform

About the role

  • Build, maintain, and optimize data pipelines and ETL processes that move data from Postgres and MongoDB into our Iceberg data lake on S3 and into ClickHouse Cloud via change data capture (CDC) using Debezium or ClickPipes (a connector sketch follows this list).
  • Collaborate with senior engineers to orchestrate transformations using Dagster, including dbt runs, custom Python ETLs, and scheduled jobs (see the Dagster sketch after this list).
  • Develop and maintain dbt models across multiple warehouses (ClickHouse, BigQuery, Snowflake, Redshift) to power embedded analytics, internal analytics, and customer-facing integrations.
  • Work with cross-functional teams to gather data requirements, test transformations, and deliver high-quality datasets for analytics and product features.
  • Assist in migrating from Fivetran to a Kafka-based streaming architecture, including configuring Kafka and Debezium connectors.
  • Participate in implementing data retention, GDPR compliance, anonymization, and backup workflows across our data lake and warehouse layers.
  • Monitor pipeline health, troubleshoot issues, and optimize query performance in ClickHouse, Snowflake, BigQuery, and Redshift (a monitoring sketch follows this list).
  • Contribute to infrastructure-as-code practices using Terraform (or similar tools) to standardize deployments and manage environments across AWS, GCP, and Azure.
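
To ground the CDC bullet, here is a minimal, hypothetical sketch of registering a Debezium Postgres source connector through the Kafka Connect REST API. The host names, credentials, topic prefix, and table list are illustrative assumptions, not the team's actual configuration.

```python
import requests

# Hypothetical Debezium Postgres source connector: streams row-level changes
# (CDC) from Postgres into Kafka topics, from which a downstream sink (e.g.
# ClickPipes or a custom consumer) can load the Iceberg/ClickHouse layer.
connector = {
    "name": "postgres-cdc-example",  # assumed connector name
    "config": {
        "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
        "database.hostname": "postgres.internal",  # placeholder host
        "database.port": "5432",
        "database.user": "debezium",               # placeholder credentials
        "database.password": "********",
        "database.dbname": "app",
        "topic.prefix": "app",                     # namespace for change topics
        "table.include.list": "public.users,public.orders",  # example tables
        "plugin.name": "pgoutput",                 # Postgres logical decoding plugin
    },
}

# Kafka Connect exposes a REST API (default port 8083) for managing connectors.
resp = requests.post("http://connect.internal:8083/connectors", json=connector)
resp.raise_for_status()
```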
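
The orchestration bullet might look roughly like the following Dagster sketch: a software-defined asset that shells out to `dbt run`, grouped into a job on a daily schedule. A real setup would likely use the dagster-dbt integration for per-model assets and lineage; the asset name, project directory, and cron cadence below are assumptions.

```python
import subprocess

from dagster import Definitions, ScheduleDefinition, asset, define_asset_job


@asset
def warehouse_models() -> None:
    """Build the dbt models for the warehouse layer.

    A plain subprocess keeps the sketch short; dagster-dbt would normally
    replace this with one asset per dbt model.
    """
    subprocess.run(
        ["dbt", "run", "--project-dir", "analytics"],  # assumed project dir
        check=True,
    )


# Group the asset into a job and run it daily at 02:00 UTC (assumed cadence).
daily_dbt_job = define_asset_job("daily_dbt_job", selection=[warehouse_models])

defs = Definitions(
    assets=[warehouse_models],
    schedules=[ScheduleDefinition(job=daily_dbt_job, cron_schedule="0 2 * * *")],
)
```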
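
For the monitoring bullet, one concrete (and hedged) example of what pipeline-health work can involve in ClickHouse: pulling the slowest recent queries from the built-in system.query_log table with the official clickhouse-connect client. The host and the one-day window are placeholders.

```python
import clickhouse_connect

# system.query_log is ClickHouse's built-in query audit table; listing the
# slowest recent queries is a common starting point for optimization work.
client = clickhouse_connect.get_client(host="clickhouse.internal")  # placeholder host

result = client.query(
    """
    SELECT query, query_duration_ms, read_rows
    FROM system.query_log
    WHERE type = 'QueryFinish'
      AND event_time > now() - INTERVAL 1 DAY
    ORDER BY query_duration_ms DESC
    LIMIT 10
    """
)

for query, duration_ms, read_rows in result.result_rows:
    print(f"{duration_ms:>8} ms  {read_rows:>12} rows  {query[:80]}")
```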

Requirements

  • 5+ years of professional experience in data engineering, ETL, or similar roles.
  • Proficiency in SQL and Python, with experience using dbt and an orchestration framework such as Dagster, Airflow, or Prefect.
  • Experience with relational databases (Postgres/Aurora) and NoSQL databases (MongoDB).
  • Familiarity with data lakes and data warehouse technologies such as Iceberg, ClickHouse, BigQuery, Snowflake, and Redshift.
  • Exposure to streaming and change data capture (CDC) technologies such as Kafka and Debezium.
  • Understanding of data modeling, incremental design, and performance optimization (a minimal incremental-load sketch follows this list).
  • Knowledge of cloud platforms (AWS, GCP, Azure) and storage services (S3, GCS, Azure Storage).
  • Experience managing infrastructure using Terraform or similar infrastructure-as-code tooling.
  • Experience with version control and collaboration using Git.
  • Strong communication skills and the ability to work collaboratively across teams.
  • A proactive, curious attitude with a desire to learn and grow in a fast-paced environment.
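
As a hedged illustration of the incremental-design requirement, here is a minimal watermark-based extract from Postgres using psycopg2: each run reads only rows changed since the previous run's high-water mark. The table, columns, and connection string are invented for the example.

```python
import psycopg2

# Watermark-based incremental extract: fetch only rows changed since the
# last successful run instead of re-reading the whole table each time.
conn = psycopg2.connect("dbname=app user=etl host=postgres.internal")  # placeholder DSN


def fetch_changed_rows(last_watermark):
    """Return rows updated after the given watermark (an updated_at timestamp)."""
    with conn.cursor() as cur:
        cur.execute(
            "SELECT id, payload, updated_at FROM public.orders "  # assumed table
            "WHERE updated_at > %s ORDER BY updated_at",
            (last_watermark,),
        )
        return cur.fetchall()


rows = fetch_changed_rows("2024-01-01T00:00:00+00:00")
# Persist the max updated_at seen as the watermark for the next run.
new_watermark = max((r[2] for r in rows), default=None)
```
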
Benefits

  • Flexible vacation policy
  • Paid holidays
  • Monthly lunch stipends
  • Annual allowances for ongoing education related to your profession
  • Career advancement opportunities
  • Home office reimbursements
  • Cell phone reimbursements
  • Wellness reimbursements

Applicant Tracking System Keywords

Tip: use these terms in your resume and cover letter to boost ATS matches.

Hard skills
SQL, Python, dbt, ETL, data modeling, performance optimization, change data capture, infrastructure-as-code, streaming architecture, data pipeline optimization
Soft skills
strong communication skills, collaborative work, proactive attitude, curiosity, desire to learn, ability to work in a fast-paced environment