Salary
💰 $170,000 - $220,000 per year
Tech Stack
Airflow, Cloud, ETL, Kafka, Python, SQL
About the role
- Build and maintain scalable data pipelines and infrastructure across batch and streaming systems.
- Support and improve core components of Imprint's data stack, including Snowflake, dbt Cloud, Change Data Capture frameworks, and reverse ETL integrations.
- Implement data modeling best practices and contribute to testing, observability, and governance initiatives.
- Collaborate with stakeholders across Product, Analytics, Finance, and Engineering to ensure timely and accurate data delivery.
- Assist with external data integrations, such as partner-facing data shares (e.g., S3, SFTP, Snowflake) and financial reporting pipelines (e.g., with NetSuite).
- Participate in design discussions for scaling data infrastructure, including schema design, orchestration, and data lineage.
- Create clear documentation and ensure reproducibility for datasets and workflows you develop.
- Stay current with modern data tools and trends, and suggest improvements to existing processes.
Requirements
- 3+ years of experience in data engineering, analytics engineering, or related roles
- Solid experience with Snowflake and dbt, with an understanding of dimensional modeling and data warehouse concepts
- Hands-on experience with ETL/ELT pipelines and familiarity with orchestration frameworks (e.g., dbt Cloud, Airflow)
- Working knowledge of data integration tools like Fivetran or similar Change Data Capture solutions
- Strong SQL skills and proficiency in Python or a similar programming language
- Experience building and maintaining production data systems with guidance from senior team members
- A detail-oriented mindset and enthusiasm for building clean, maintainable data systems
- Strong communication skills and ability to work effectively with cross-functional partners
- Legally authorized to work in the United States (the application asks about work authorization and visa sponsorship status)
Nice to have
- Experience in fintech, high-growth startups, or customer-facing data products
- Familiarity with event streaming technologies like Kafka or Kinesis
- Exposure to data governance, security, or compliance practices
- Interest in ML pipelines or experimentation frameworks