Senior Data Engineer

Grainger

Full-time

Location: 🇺🇸 United States • Illinois

Salary

💰 $110,500 - $184,100 per year

Job Level

Senior

Tech Stack

Apache • AWS • Azure • Cloud • Distributed Systems • Docker • ETL • Google Cloud Platform • Kafka • Kubernetes • OpenShift • Postgres • Python • Scala • Spark • SQL • Terraform

About the role

  • Design, build, and maintain scalable, cloud-native data pipelines and ETL workflows using tools such as Apache Spark, AWS Glue, and Snowflake.
  • Scope, develop, and rigorously test data products that support analytics, operational reporting, and real-time decision-making.
  • Develop and deploy high-quality backend applications using Python, SQL, and Scala, or other languages commonly used in data engineering environments.
  • Build and maintain data platforms and structures such as data lakes, data warehouses, and APIs to support both real-time and batch use cases.
  • Partner with cross-functional teams (data scientists, software engineers, product, operations, and design) to deliver data-driven business solutions.
  • Build, optimize, and support CI/CD pipelines, infrastructure as code (IaC), and deployment automation using Docker, Kubernetes, GitHub Actions, or similar tools.
  • Develop clean, maintainable, and well-documented code, following best practices in testing (TDD), version control, and observability.
  • Ensure data quality and integrity through automated validation frameworks and modern monitoring practices.
  • Mentor junior engineers and interns, particularly in areas such as data platform architecture, data product development, and engineering best practices.

Requirements

  • Bachelor’s degree in Computer Science, Software/Data Engineering, or related discipline—or equivalent practical experience.
  • 7+ years of experience designing, building, and supporting large-scale, production-grade software or data engineering systems.
  • Proven track record of scoping and delivering end-to-end data products, from design and development through testing and deployment.
  • Hands-on experience with cloud platforms like AWS, GCP, or Azure, especially for data storage, processing, and orchestration.
  • Proficiency with tools such as Snowflake, Databricks, PostgreSQL, and event-streaming platforms like Kafka.
  • Strong knowledge of containerization (Docker, Kubernetes/OpenShift) and DevOps principles.
  • Experience building RESTful APIs, event-driven pipelines, and integrating third-party systems and services.
  • Experience with SAP data extraction and data models, specifically SAP S/4HANA and SAP HANA, integrating through SAP Datasphere or similar sources.
  • Experience with infrastructure as code tools like Terraform or CloudFormation, and automation of CI/CD pipelines.
  • Deep understanding of distributed systems, data governance, and scalable architecture patterns.
  • Excellent communication skills, strong documentation practices, and a collaborative mindset with a passion for mentoring others.