Design, build, and maintain scalable, cloud-native data pipelines and ETL workflows using tools such as Apache Spark, AWS Glue, and Snowflake.
Architect, develop, and rigorously test data products that support analytics, operational reporting, and real-time decision-making.
Develop and deploy high-quality backend applications using Python, SQL, and Scala or other languages commonly used in data engineering environments.
Build and maintain data platforms and services such as data lakes, data warehouses, and APIs to support both real-time and batch use cases.
Partner with cross-functional teams (data scientists, software engineers, product, operations, and design) to deliver data-driven business solutions.
Build, optimize, and support CI/CD pipelines, infrastructure as code (IaC), and deployment automation using Docker, Kubernetes, GitHub Actions, or similar tools.
Develop clean, maintainable, and well-documented code, following best practices in test-driven development (TDD), version control, and observability.
Ensure data quality and integrity through automated validation frameworks and modern monitoring practices.
Mentor junior engineers and interns, particularly in areas such as data platform architecture, data product development, and engineering best practices.
Requirements
Bachelor’s degree in Computer Science, Software/Data Engineering, or related discipline—or equivalent practical experience.
7+ years of experience designing, building, and supporting large-scale, production-grade software or data engineering systems.
Proven track record of scoping and delivering end-to-end data products, from design and development through testing and deployment.
Hands-on experience with cloud platforms like AWS, GCP, or Azure, especially for data storage, processing, and orchestration.
Proficiency with tools such as Snowflake, Databricks, PostgreSQL, and event-streaming platforms like Kafka.
Strong knowledge of containerization (Docker, Kubernetes/OpenShift) and DevOps principles.
Experience building RESTful APIs, event-driven pipelines, and integrating third-party systems and services.
Experience with SAP data extraction and data models, specifically SAP S/4HANA and SAP HANA, integrating through SAP Datasphere or similar sources.
Experience with infrastructure as code tools like Terraform or CloudFormation, and automation of CI/CD pipelines.
Deep understanding of distributed systems, data governance, and scalable architecture patterns.
Excellent communication skills, strong documentation practices, and a collaborative mindset with a passion for mentoring others.
Benefits
Medical, dental, vision, and life insurance plans with coverage starting on day one of employment, plus 6 free sessions each year with a licensed therapist to support your emotional wellbeing.
18 days of paid time off (PTO) annually for full-time employees (accrual prorated based on employment start date) and 6 company holidays per year.
6% company contribution to a 401(k) Retirement Savings Plan each pay period, no employee contribution required.
Employee discounts, tuition reimbursement, student loan refinancing, and free access to financial counseling, education, and tools.
Maternity support programs, nursing benefits, and up to 14 weeks of paid leave for birth parents and up to 4 weeks of paid leave for non-birth parents.