Implement and maintain deployment and ETL pipelines for data products
Integrate with a variety of products and APIs to deliver a superior development, testing, and release experience for developers
Integrate diverse data sources and vendor products, including databases, APIs, and third-party services, to make data available for analytical and operational use
Automate repetitive and complex ETL deployment tasks to improve team efficiency and reduce manual intervention
Implement monitoring solutions to track data pipeline performance and quickly identify and resolve issues
Perform code reviews and ensure adherence to best practices and standards
Develop, deploy, and maintain scalable data pipelines for large volumes of data
Participate in the team's support rotation
Adhere to the data team's established development and CI/CD practices
Requirements
2+ years of relevant experience
Strong programming skills in Python or Java
Proficiency with SQL and relational databases
Experience with Snowflake and its services
Experience with CI/CD platforms (Harness, CircleCI, Argo CD, Jenkins)
Experience with source control systems (Git)
Ability to work independently and manage multiple tasks
Thoughtful approach to collaboration, design, and decision-making that prioritizes equity, access, and continuous learning
Commitment to creating inclusive, respectful environments where all voices are valued and supported
Benefits
Flexible work week
401k/RRSP matching
Mental health support
Paid sabbaticals
Generous parental leave
Flexible work options
Competitive benefits