Senior Data Engineer

Fountain

full-time

Location Type: Remote

Location: Remote • 🇬🇧 United Kingdom

Job Level

Senior

Tech Stack

Airflow, Amazon Redshift, AWS, Azure, BigQuery, Cloud, ETL, Google Cloud Platform, Kafka, MongoDB, NoSQL, Postgres, Python, SQL, Terraform

About the role

  • Build, maintain, and optimize data pipelines and ETL processes to move data from Postgres and MongoDB into our Iceberg data lake on S3 and ClickHouse Cloud via change data capture (CDC) using Debezium or ClickPipes (see the connector sketch after this list).
  • Collaborate with senior engineers to orchestrate transformations using Dagster, including dbt runs, custom Python ETLs, and scheduled jobs (see the Dagster sketch after this list).
  • Develop and maintain dbt models across multiple warehouses (ClickHouse, BigQuery, Snowflake, Redshift) to power embedded analytics, internal analytics, and customer-facing integrations.
  • Work with cross-functional teams to gather data requirements, test transformations, and deliver high-quality datasets for analytics and product features.
  • Assist in migrating from Fivetran to a Kafka-based streaming architecture, including configuring Kafka and Debezium connectors.
  • Participate in implementing data retention, GDPR compliance, anonymization, and backup workflows across our data lake and warehouse layers.
  • Monitor pipeline health, troubleshoot issues, and optimize query performance in ClickHouse, Snowflake, BigQuery, and Redshift (see the query-log sketch after this list).
  • Contribute to infrastructure-as-code practices using Terraform (or similar tools) to standardize deployments and manage environments across AWS, GCP, and Azure.
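
For illustration, a minimal sketch of the CDC setup described above: registering a Debezium Postgres source connector through the Kafka Connect REST API. All hostnames, credentials, and database/table names are placeholders, not Fountain's actual configuration.

```python
"""Sketch: register a Debezium Postgres CDC source connector via the
Kafka Connect REST API. Every host, credential, and table name below
is a placeholder."""
import requests

CONNECT_URL = "http://kafka-connect:8083/connectors"  # placeholder host

connector_config = {
    "name": "postgres-cdc-source",  # hypothetical connector name
    "config": {
        "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
        "plugin.name": "pgoutput",                 # Postgres logical decoding plugin
        "database.hostname": "postgres.internal",  # placeholder
        "database.port": "5432",
        "database.user": "cdc_user",               # placeholder
        "database.password": "********",
        "database.dbname": "app_db",               # placeholder
        "topic.prefix": "app",                     # topics become app.<schema>.<table>
        "table.include.list": "public.users",      # placeholder table
        "snapshot.mode": "initial",                # snapshot once, then stream WAL changes
    },
}

# Kafka Connect returns 201 on success; raise on anything else.
resp = requests.post(CONNECT_URL, json=connector_config)
resp.raise_for_status()
print(resp.json())
```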
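A minimal Dagster sketch of the orchestration pattern: two dependent assets refreshed on a daily schedule, with the transform step standing in for a dbt model. Asset names, the cron expression, and the data are illustrative only.

```python
"""Sketch: a two-asset Dagster graph on a daily schedule. The transform
asset is a stand-in for a dbt run; names and data are illustrative."""
from dagster import (
    AssetSelection,
    Definitions,
    ScheduleDefinition,
    asset,
    define_asset_job,
)


@asset
def raw_users():
    """Extract step (placeholder): in practice this would read from a
    Postgres source or a CDC topic via a Dagster resource."""
    return [{"id": 1, "status": "active"}, {"id": 2, "status": "inactive"}]


@asset
def active_users(raw_users):
    """Transform step: keep only active users (stand-in for a dbt model)."""
    return [row for row in raw_users if row["status"] == "active"]


daily_job = define_asset_job("daily_refresh", selection=AssetSelection.all())

defs = Definitions(
    assets=[raw_users, active_users],
    schedules=[ScheduleDefinition(job=daily_job, cron_schedule="0 6 * * *")],
)
```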
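And a small monitoring sketch, assuming the clickhouse-connect client: pulling the slowest recent queries from ClickHouse's system.query_log. Host and credentials are placeholders.

```python
"""Sketch: surface slow ClickHouse queries from system.query_log using
clickhouse-connect. Host and credentials are placeholders."""
import clickhouse_connect

client = clickhouse_connect.get_client(
    host="clickhouse.internal",  # placeholder host
    username="monitor",          # placeholder user
    password="********",
)

# Slowest finished queries from the last hour.
result = client.query(
    """
    SELECT query_duration_ms, read_rows, query
    FROM system.query_log
    WHERE type = 'QueryFinish'
      AND event_time > now() - INTERVAL 1 HOUR
    ORDER BY query_duration_ms DESC
    LIMIT 10
    """
)

for duration_ms, read_rows, query in result.result_rows:
    print(f"{duration_ms} ms, {read_rows} rows read: {query[:80]}")
```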

Requirements

  • 5+ years of professional experience in data engineering, ETL, or similar roles.
  • Proficiency in SQL and Python, with experience using dbt and an orchestration framework such as Dagster, Airflow, or Prefect.
  • Experience with relational databases (Postgres/Aurora) and NoSQL databases (MongoDB).
  • Familiarity with data lakes and data warehouse technologies such as Iceberg, ClickHouse, BigQuery, Snowflake, and Redshift.
  • Exposure to streaming and change data capture (CDC) technologies such as Kafka and Debezium.
  • Understanding of data modeling, incremental model design, and query performance optimization.
  • Knowledge of cloud platforms (AWS, GCP, Azure) and storage services (S3, GCS, Azure Storage).
  • Experience managing infrastructure using Terraform or similar infrastructure-as-code tooling.
  • Experience with version control and collaboration using Git.
  • Strong communication skills and the ability to work collaboratively across teams.
  • A proactive, curious attitude with a desire to learn and grow in a fast-paced environment.

Benefits

  • Competitive health plans and a retirement plan
  • Flexible vacation policy
  • Paid holidays
  • Monthly lunch stipends
  • Annual allowances for ongoing education related to your profession and career advancement
  • Home office, cell phone, and wellness reimbursements

Applicant Tracking System Keywords

Tip: use these terms in your resume and cover letter to boost ATS matches.

Hard skills
SQL, Python, dbt, ETL, data modeling, performance optimization, change data capture, infrastructure-as-code, streaming architecture, data pipeline optimization
Soft skills
strong communication skills, collaborative work, proactive attitude, curiosity, desire to learn, ability to work in fast-paced environment