DeepL

Senior Staff Data Engineer

Full-time

Location Type: Hybrid

Location: London, United Kingdom

About the role

  • Define and implement enterprise-wide data engineering standards, strategies, and best practices for data solutions.
  • Provide expert guidance on technology selection, cloud services (AWS), and architectural decisions for data solutions.
  • Drive continuous improvements in efficiency, cost reduction, and innovation across the data organization.
  • Evaluate and recommend tools, technologies, and frameworks to enhance our data capabilities.
  • Partner with and influence leaders from engineering, analytics, machine learning, and security teams to align on goals.
  • Mentor and be a thought leader across data, engineering and platform teams, fostering a culture of technical excellence.
  • Collaborate with cross-functional stakeholders to understand data requirements and translate them into technical solutions.
  • Work closely with customer-facing teams to ensure data solutions meet enterprise client needs.
  • Drive best practices in data security, governance, and compliance aligned with enterprise B2B standards.
  • Implement robust security measures for data at rest and in transit.

Requirements

  • Extensive data expertise: 10+ years of experience in data engineering or a related role, with at least 5 years at the staff or principal level.
  • Data architecture experience: Deep understanding of data infrastructure, data warehousing, ETL/ELT processes, and/or data pipeline orchestration.
  • Cloud mastery: Proven experience with cloud platforms (AWS, Azure, or GCP) and cloud-native data services.
  • Scripting & automation: Advanced scripting skills in Python, Bash, or similar languages for automation and tooling.
  • Leadership & communication: Proven track record of technical leadership, mentoring engineers, and influencing cross-functional teams.
  • Enterprise experience: Experience working in high-growth technology or SaaS environments with distributed systems and microservices architecture.
  • Experience with data-specific tools and technologies such as Apache Airflow, dbt, Apache Spark, Kafka, or similar.
  • Experience building real-time streaming data pipelines (Spark, Flink, etc.).
  • Knowledge of data warehousing solutions (Snowflake, BigQuery, Redshift) and data lake architectures.
  • Background in data engineering or analytics engineering.

Benefits

  • Diverse and internationally distributed team: joining our team means becoming part of a large, global community with people of more than 90 nationalities.
  • Open communication, regular feedback: as a language-focused company, we value the importance of clear, honest communication.
  • Hybrid work, flexible hours: we offer a hybrid work schedule, with team members coming into the office twice a week.
  • Virtual shares: an ownership mindset in every role.
  • Regular in-person team events: we bond over vibrant events that are as unique as our team.
  • Monthly full-day hacking sessions: we hold Hack Fridays once a month.
  • 30 days of annual leave: we value your peace of mind.
  • Competitive benefits: we've crafted our package to reflect the diversity of our team.

Applicant Tracking System Keywords

Tip: use these terms in your resume and cover letter to boost ATS matches.

Hard Skills & Tools
data engineering, data architecture, ETL, ELT, data pipeline orchestration, cloud platforms, scripting, automation, data warehousing, data lake architectures
Soft Skills
technical leadership, mentoring, influencing, collaboration, communication