Design and operate cloud-native data pipelines that transform complex operational data from legacy systems into modern, scalable products.
Build and maintain robust ETL/ELT pipelines that ingest, validate, and transform operational data from legacy systems.
Design and enforce platform data standards, including schema governance and reusable models.
Develop serverless workflows using AWS-native services such as Step Functions, Lambda, S3, and ECS, with infrastructure defined in Terraform.
Create automation tools that empower non-engineering teams to execute data migrations reliably.
Embed observability into pipelines with logging, error handling, rollback paths, and monitoring.
Collaborate with engineering, product, and client-facing teams to define platform standards and long-term vision.
Build reusable infrastructure to reduce complexity, improve data quality, and accelerate growth. Help shape a Platform team focused on schema management, validation frameworks, integrations, and automation.
Requirements
5+ years of experience building AWS-native data infrastructure using services such as Step Functions, Lambda, S3, ECS, and IAM, along with Terraform.
Proven track record designing and delivering complex ETL/ELT pipelines for operational product data.
Strong proficiency in Python and SQL for data transformation and pipeline development.
Expertise in schema design, data modeling, validation frameworks, and scalable error handling.
Familiarity with PostgreSQL and Microsoft SQL Server (T-SQL), with experience bridging data models across systems.
Solid understanding of infrastructure-as-code (Terraform preferred) and DevOps automation practices.
Excellent communication and collaboration skills across technical and non-technical teams.