Patreon

Senior Software Engineer, Data

Full-time

Location Type: Remote

Location: California · New York · United States


Salary

💰 $200,000 - $300,000 per year

About the role

  • Design, build, and maintain the pipelines that power all data use cases. This includes ingesting raw data from production databases, object storage, message queues, and vendors into our Data Lake, and building core datasets and metrics.
  • Develop intuitive, performant, and scalable data models (facts, dimensions, aggregations) that support product features, internal analytics, experimentation, and machine learning workloads.
  • Implement robust batch and streaming pipelines using Spark, Python, and Airflow.
  • Build pipelines adhering to standards for accuracy, completeness, lineage, and dependency management. Build monitoring and observability so teams can trust what they’re using.
  • Work with Product, Data Science, Infrastructure, Finance, Marketing, and Sales to turn ambiguous questions into well-scoped, high-impact data solutions.
  • Pay down technical debt, improve automation, and follow best practices in data modeling, testing, and reliability. Mentor earlier-career engineers on the team.
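Illustrative only: a minimal, dependency-free Python sketch of the kind of batch aggregation step these pipelines perform (raw events rolled up into a daily fact table). In production this logic would typically run on Spark and be orchestrated by Airflow; the event schema, field names, and function name below are hypothetical, not Patreon's actual data model.

```python
from collections import defaultdict
from datetime import date

# Hypothetical raw events, as ingested from a production database or message queue.
raw_events = [
    {"creator_id": "c1", "day": date(2024, 1, 1), "amount_usd": 5.0},
    {"creator_id": "c1", "day": date(2024, 1, 1), "amount_usd": 10.0},
    {"creator_id": "c2", "day": date(2024, 1, 1), "amount_usd": 3.0},
]

def build_daily_revenue_fact(events):
    """Aggregate raw events into a (creator_id, day) -> total revenue fact table."""
    fact = defaultdict(float)
    for e in events:
        fact[(e["creator_id"], e["day"])] += e["amount_usd"]
    return dict(fact)

fact_table = build_daily_revenue_fact(raw_events)
```

The same grouping key (creator, day) is what a dimensional model would expose as foreign keys into creator and date dimensions, with the summed amount as the fact's measure.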

Requirements

  • 4+ years of experience in software development, including at least 2 years building scalable, production-grade data pipelines.
  • Familiarity with SQL and distributed data processing tools like Spark, Flink, Kafka Streams, or similar.
  • Strong programming foundations in Python or a similar language, with sound software engineering practices (testing, CI/CD, monitoring) and design principles.
  • Familiar with modern data lakes (e.g., Delta Lake, Iceberg), data warehouses (e.g., Snowflake, Redshift, BigQuery), and production data stores such as relational databases (e.g., MySQL, PostgreSQL), object storage (e.g., S3), key-value stores (e.g., DynamoDB), and message queues (e.g., Kinesis, Kafka).
  • Excellent collaboration and communication skills; comfortable partnering with non-technical stakeholders, writing crisp design docs, giving actionable feedback, and influencing without authority across teams.
  • Understanding of data modeling and metric design principles.
  • Passionate about data quality, system reliability, and empowering others through well-crafted data assets.
  • Highly motivated self-starter who thrives in a collaborative, fast-paced environment and takes pride in high-craft, high-impact work.
  • Bachelor’s degree in Computer Science, Computer Engineering, or a related field, or the equivalent.