FastSpring

Staff Data Engineer

Full-time

Location Type: Remote

Location: United States

Salary

💰 $190,000 - $212,000 per year

About the role

  • Lead the design and construction of a next-generation data warehouse optimized for efficiency, performance, and massive scale.
  • Build and maintain robust data pipelines for real-time ingestion from large-scale production transactional databases containing hundreds of millions of rows.
  • Develop real-time data transformation layers to ensure the data warehouse reflects live business activity, enabling "up-to-the-minute" analytics for our ecosystem.
  • Partner with the Senior Data Product Manager to define, evangelize, and execute a data roadmap that meets both internal and external customer needs.
  • Research and select appropriate technical solutions, coordinate delivery with dedicated agile teams, and champion engineering best practices.
  • Rigorously test and validate data integrity, establishing and measuring success metrics for system performance and data reliability.
  • Work closely with stakeholders to develop technical business cases for new initiatives, ensuring alignment across the development organization.

Requirements

  • 5+ years of data engineering experience in a high-growth SaaS environment.
  • Expert-level experience with message brokers and streaming platforms such as Apache Kafka, Amazon Kinesis, or Google Pub/Sub to manage high-throughput transactional data.
  • Proficiency with real-time and batch transformation tools, specifically dbt (data build tool) for managing Snowflake logic and Apache Flink or Spark Streaming for in-flight processing.
  • Deep architectural knowledge of Snowflake, including Snowpipe for continuous ingestion and streams and tasks for change data capture (CDC).
  • Strong understanding of schemas and data pipelines, with specific experience extracting data from large-scale relational databases (Postgres/MySQL) via CDC tools like Debezium.
  • Mastery of SQL and Python for building automated, scalable data movement and transformation workflows.
  • Proven experience designing and maintaining Data APIs to expose warehouse insights to external or internal customers.
  • Familiarity with business intelligence tools such as Looker.
  • Exceptional ability to synthesize complex technical concepts and communicate them clearly to stakeholders, including the Senior Data Product Manager and executive leadership.
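To give candidates a concrete sense of the CDC work described above: Debezium emits row-change events in a standard `op`/`before`/`after` envelope, and a downstream consumer applies them to keep a warehouse table in sync. A minimal sketch, assuming a simplified event shape and an in-memory table in place of Kafka and Snowflake (the function name and event payloads here are illustrative, not part of the posting):

```python
# Illustrative sketch: applying Debezium-style CDC events to a table
# keyed by primary key. The real pipeline would consume these events
# from Kafka and merge them into Snowflake; a dict stands in here.

def apply_cdc_event(table: dict, event: dict) -> None:
    """Apply one CDC event: c=create, u=update, r=snapshot read, d=delete."""
    op = event["op"]
    if op in ("c", "u", "r"):
        row = event["after"]          # new row image wins on create/update
        table[row["id"]] = row
    elif op == "d":
        table.pop(event["before"]["id"], None)  # drop the deleted key

# Hypothetical order-status changes, in the order Debezium would emit them.
events = [
    {"op": "c", "after": {"id": 1, "status": "pending"}},
    {"op": "u", "before": {"id": 1, "status": "pending"},
                "after":  {"id": 1, "status": "paid"}},
    {"op": "c", "after": {"id": 2, "status": "pending"}},
    {"op": "d", "before": {"id": 2, "status": "pending"}},
]

table: dict = {}
for e in events:
    apply_cdc_event(table, e)

print(table)  # {1: {'id': 1, 'status': 'paid'}}
```

Replaying the event log in order leaves only order 1 in its latest state, which is exactly the "table reflects live business activity" property the role's real-time transformation layers must preserve at scale.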

Benefits
  • Corporate bonus plan
  • Professional development opportunities

Applicant Tracking System Keywords

Tip: use these terms in your resume and cover letter to boost ATS matches.

Hard Skills & Tools
data engineering, real-time data transformation, data pipelines, SQL, Python, Apache Kafka, Amazon Kinesis, Google Pub/Sub, Snowflake, dbt

Soft Skills
communication, stakeholder engagement, synthesis of technical concepts, collaboration, leadership