Salary
💰 CA$200,000 - CA$225,000 per year
Tech Stack
AWS · Azure · Cloud · Docker · ETL · Flask · Go · Google Cloud Platform · Kubernetes · Python · Terraform
About the role
- Architect and maintain Python-based services that implement pipelines and workflows using Flask and modern data frameworks
- Build and scale secure, well-structured API endpoints that interface with data stores, processing engines, and downstream applications
- Implement advanced data orchestration logic, ETL/ELT strategies, and tool chaining for complex data workflows
- Design and optimize data pipelines, including data loaders, transformation strategies, and integration with search systems like OpenSearch
- Develop and maintain ML data processing pipelines for ingesting, transforming, and serving data across various storage systems
- Containerize data services using Docker and implement scalable deployment strategies with Kubernetes
- Collaborate with engineering teams to productionize data models and processing workflows
- Optimize data processing techniques for improved performance, reliability, and cost efficiency
- Set up robust test coverage, monitoring, and CI/CD pipelines for data-powered backend services
- Stay current with emerging trends in data engineering, pipeline architectures, agent architectures, and data systems
Requirements
- 3+ years of experience as a full-stack engineer with strong Python expertise
- Hands-on experience building data pipelines and processing architectures in production
- Proficiency with data orchestration frameworks and ETL/ELT tools
- Experience with databases, data modeling, and search implementations
- Strong knowledge of data processing optimization and performance tuning
- Experience with cloud platforms (AWS/GCP/Azure) for data workload deployment
- Proficiency with Docker and Kubernetes for containerizing and orchestrating applications
- Comfortable with modern data tooling and monitoring systems
- Track record of building end-to-end data systems at scale
- Deep full-stack development expertise and understanding of modern data processing patterns
- (Bonus) Experience with Go programming language
- (Bonus) Experience with OpenTelemetry (OTel) for observability and monitoring
- (Bonus) Contributions to open-source data engineering projects
- (Bonus) Published research or blog posts on data engineering, pipelines, or data systems
- (Bonus) Experience with data observability, stream processing, and real-time data systems