Senior Data Engineer – Intelligent Manufacturing

General Motors

Full-time

Location: 🇺🇸 United States

Salary

💰 $134,000 - $219,400 per year

Job Level

Senior

Tech Stack

AWS, Azure, Cloud, ETL, Google Cloud Platform, Hadoop, HBase, Kafka, Kubernetes, NoSQL, Python, Scala, Spark, SQL

About the role

  • Assemble large, complex data sets that meet functional and non-functional business requirements.
  • Identify, design, and implement process improvements, including automation, data delivery optimization, and infrastructure redesign for scalability.
  • Lead and deliver data-driven solutions across multiple languages, tools, and technologies.
  • Contribute to architecture discussions, solution design, and strategic technology adoption.
  • Build and optimize highly scalable data pipelines incorporating complex transformations and efficient code.
  • Design and develop new source system integrations from varied formats (files, database extracts, APIs); see the pipeline sketch after this list.
  • Design and implement solutions for delivering data that meets SLA requirements.
  • Work with operations teams to resolve production issues related to the platform.
  • Apply best practices such as Agile methodologies, design thinking, and continuous deployment.
  • Develop tooling and automation to make deployments and production monitoring more repeatable.
  • Collaborate with business and technology partners, providing leadership, best practices, and coaching.
  • Mentor peers and junior engineers; educate colleagues on emerging industry trends and technologies.
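
To make the pipeline and integration responsibilities above concrete, here is a minimal, hypothetical PySpark sketch of the kind of transformation-heavy batch pipeline the role describes. The bucket paths, column names (`amount`, `status`, `plant_id`), and dataset are illustrative assumptions, not actual GM systems.

```python
# Hypothetical sketch only: a small batch pipeline that ingests a file-based
# source, applies typed transformations, and writes partitioned output.
# All paths, columns, and names below are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("order-summaries").getOrCreate()

# Ingest one of the "varied formats" mentioned above: a raw CSV extract.
orders = spark.read.option("header", True).csv("s3a://example-bucket/raw/orders/")

# Transform: enforce types, filter, and aggregate per plant.
summaries = (
    orders
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("status") == "COMPLETE")
    .groupBy("plant_id")
    .agg(
        F.sum("amount").alias("total_amount"),
        F.count("*").alias("order_count"),
    )
)

# Partitioned Parquet output lets downstream readers prune by plant_id.
summaries.write.mode("overwrite").partitionBy("plant_id").parquet(
    "s3a://example-bucket/curated/order_summaries/"
)
```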

Requirements

  • Bachelor’s degree in Computer Science, Software Engineering, or related field, or equivalent experience
  • 7+ years of data engineering/development experience, including Python or Scala, SQL, and relational/non-relational data storage (ETL frameworks, big data processing, NoSQL)
  • 3+ years of experience with distributed data processing (Spark) and container orchestration (Kubernetes)
  • Proficiency with data streaming using Kafka on Kubernetes; see the streaming sketch after this list
  • Experience with cloud platforms (Azure preferred; AWS or GCP also considered)
  • Solid understanding of CI/CD principles and tools
  • Familiarity with big data technologies such as Hadoop, Hive, HBase, object storage (ADLS/S3), and event queues
  • Strong understanding of performance optimization techniques such as partitioning, clustering, and caching
  • Proficiency with SQL, key-value datastores, and document stores
  • Familiarity with data architecture and modeling concepts to support efficient data consumption
  • Strong collaboration and communication skills; ability to work across multiple teams and disciplines
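
As an illustration of the streaming requirement above, here is a minimal, hypothetical sketch of Spark Structured Streaming consuming a Kafka topic. The broker address, topic name, and checkpoint/output paths are placeholder assumptions; in practice such a job would typically be deployed to Kubernetes (for example via the Spark operator).

```python
# Hypothetical sketch only: Spark Structured Streaming reading from Kafka.
# Broker, topic, and storage paths are placeholder assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("telemetry-stream").getOrCreate()

# Subscribe to a Kafka topic; records arrive with binary key/value columns.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "kafka:9092")
    .option("subscribe", "machine-telemetry")
    .load()
)

# Decode the binary payload for downstream consumers.
decoded = events.select(F.col("value").cast("string").alias("payload"))

# A checkpointed file sink gives exactly-once delivery into object storage.
query = (
    decoded.writeStream.format("parquet")
    .option("path", "s3a://example-bucket/streams/telemetry/")
    .option("checkpointLocation", "s3a://example-bucket/checkpoints/telemetry/")
    .start()
)
query.awaitTermination()
```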