Data Engineer

Cargill

full-time

Location Type: Hybrid

Location: Atlanta, United States

About the role

  • Designs, builds and maintains moderately complex data systems
  • Develops moderately complex data products and solutions using advanced data engineering and cloud-based technologies
  • Maintains and supports the development of streaming and batch data pipelines (see the orchestration sketch after this list)
  • Reviews existing data systems and architectures
  • Helps prepare data infrastructure to support the efficient storage and retrieval of data
  • Implements automated deployment pipelines to improve efficiency of code deployments
  • Performs moderately complex data modeling aligned with the datastore technology
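
A typical responsibility behind the pipeline bullets above is orchestrating batch jobs with a scheduler such as Airflow (listed in the requirements below). The following is a minimal sketch using the Airflow 2.x TaskFlow API; the DAG name, task logic, and sample data are hypothetical and only illustrate a daily extract-transform-load flow.

    from datetime import datetime

    from airflow.decorators import dag, task


    @dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
    def daily_orders_refresh():
        """Hypothetical daily batch pipeline: extract, transform, load."""

        @task
        def extract() -> list[dict]:
            # Assumption: pull raw order records from some source system.
            return [{"order_id": 1, "amount": 42.0}, {"order_id": 2, "amount": 13.5}]

        @task
        def transform(rows: list[dict]) -> float:
            # Assumption: a simple aggregation standing in for real modeling logic.
            return sum(r["amount"] for r in rows)

        @task
        def load(total: float) -> None:
            # Assumption: write the result to a target datastore (stubbed here).
            print(f"daily revenue: {total}")

        load(transform(extract()))


    daily_orders_refresh()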

Requirements

  • Minimum of 2 years of relevant work experience
  • Familiarity with major cloud platforms (AWS, GCP, Azure)
  • Experience with modern data architectures, including data lakes, data lakehouses, and data hubs
  • Proficiency in data collection and ingestion tools (Kafka, AWS Glue) and storage formats (Iceberg, Parquet)
  • Knowledge of streaming architectures and tools (Kafka, Flink)
  • Strong background in data transformation and modeling using SQL-based frameworks and orchestration tools (dbt, AWS Glue, Airflow)
  • Familiarity with Spark for data transformation (a brief sketch follows this list)
  • Proficiency in Python, Java, Scala, or similar programming languages
  • Expert-level proficiency in SQL for data manipulation and optimization
  • Understanding of data governance principles
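
As a concrete illustration of the Spark, Parquet, and SQL-style transformation items above, here is a minimal PySpark batch job. The bucket paths, column names, and aggregation are hypothetical assumptions; the sketch only shows reading Parquet, applying a transformation, and writing the curated result back out.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders_batch_transform").getOrCreate()

    # Assumption: raw order events land as Parquet files at this made-up path.
    orders = spark.read.parquet("s3://example-bucket/raw/orders/")

    # Filter out invalid rows and aggregate revenue per day.
    daily_revenue = (
        orders
        .filter(F.col("amount") > 0)
        .groupBy(F.to_date("order_ts").alias("order_date"))
        .agg(F.sum("amount").alias("revenue"))
    )

    # Assumption: curated output is written back as Parquet for downstream use.
    daily_revenue.write.mode("overwrite").parquet("s3://example-bucket/curated/daily_revenue/")

    spark.stop()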

Benefits

  • Health insurance
  • Retirement plans
  • Paid time off
  • Flexible work arrangements
  • Professional development

Applicant Tracking System Keywords

Tip: use these terms in your resume and cover letter to boost ATS matches.

Hard skills
data engineering, data modeling, data transformation, SQL, Python, Java, Scala, Kafka, AWS Glue, Airflow