Tech Stack
ETL, PySpark, Python, SQL, Terraform
About the role
- Develop and maintain platform tools for deployment and automation of data products.
- Partner with value stream teams to adopt Data Mesh principles, enabling them to own and scale their data products.
- Design and optimize ingestion and transformation pipelines and jobs using SQL, Python, and PySpark.
- Apply infrastructure-as-code (e.g., Terraform) to automate platform provisioning.
- Contribute to and improve CI/CD workflows for data pipelines and platform services.
- Ensure our data platform is reliable and efficient at scale.
- Collaborate across our distributed team to continuously improve platform reliability, performance, and cost-effectiveness.
Requirements
- 3+ years of experience in data engineering and DevOps practices.
- Knowledge of SQL, Python, and PySpark in data-intensive environments.
- Understanding of ETL concepts, Medallion Architecture, data modeling, and data warehousing.
- Strong communication skills to work effectively in a remote, distributed team.
About us
- We pride ourselves on making a difference for our employees, our clients, and their businesses.
- We accept team members for who they are and what they bring to the table.
- We are proud to build all our relationships based on transparency and trust.
- We are a team of energetic and curious individuals passionate about the work we do every day!
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard skills
SQL, Python, PySpark, Terraform, CI/CD, ETL, data modeling, data warehousing, data ingestion, data transformation