Salary
💰 $137,000 - $188,000 per year
Tech Stack
Cloud, Kafka, Pandas, Python, Spark, SQL
About the role
- Leverage data engineering principles and technologies to build and maintain robust data solutions
- Design, build, and maintain scalable and reliable data pipelines and data models within Databricks and Snowflake
- Create robust, self-healing data pipelines through automation to reduce outages and recovery time
- Collaborate with data analysts, product managers, and other stakeholders in an Agile environment to understand requirements and provide progress updates
- Enforce best practices and coding standards, and ensure delivery of high-quality, maintainable code and data
- Stay abreast of industry trends and encourage continuous learning across the team
- Lead migration efforts from Snowflake to Databricks and support production rollouts
Requirements
- Bachelor's degree preferred (Business, Economics, Statistics, or Computer Science)
- 5+ years of experience in a data analytics or data engineering role (preferably in a SaaS business)
- High proficiency in SQL
- Python experience (especially data processing libraries like Pandas/Polars) and/or data streaming technologies (Kafka, Spark Streaming)
- Existing experience with Databricks (required)
- Knowledge of Snowflake (nice-to-have)
- Experience with data modeling frameworks, especially dbt Cloud
- Experience with documentation and version control via GitHub, Atlan, and/or Confluence (preferred)
- Proficient in Excel and Google Sheets (pivot tables, keyboard shortcuts, etc.)
- Expertise in data visualization tools, preferably Looker with LookML
- Proven experience taking a leading role in data engineering projects from conception to production
- Strategic thinker with a proactive approach to identifying and solving business challenges
- Sense of ownership and ability to independently initiate and drive projects to completion
- Commitment to quality and best practices ("getting it right" as well as "getting it done")