Senior Data Engineer

Cargill

Full-time

Location Type: Office

Location: Atlanta, United States


Job Level

Senior

Tech Stack

Airflow, AWS, Azure, Cloud, Google Cloud Platform, Java, Kafka, Python, Scala, Spark, SQL

About the role

  • Designs, builds and maintains complex data systems that enable data analysis and reporting.
  • Ensures that large sets of data are efficiently processed and made accessible for decision making.
  • Prepares data infrastructure to support the efficient storage and retrieval of data.
  • Examines and determines appropriate data formats to improve data usability and accessibility across the organization.
  • Develops complex data products and solutions using advanced engineering and cloud-based technologies, ensuring they are designed and built to be scalable, sustainable and robust.
  • Develops and maintains streaming and batch data pipelines that ingest data from various sources, transform it into usable information, and move it to data stores such as data lakes and data warehouses (see the sketch after this list).
  • Reviews existing data systems and architectures to identify areas for improvement and optimization.
  • Collaborates with multi-functional data and advanced analytics teams to gather requirements and ensure that data solutions meet the functional and non-functional needs of various partners.
  • Builds complex prototypes to test new concepts and implements data engineering frameworks and architectures that improve data processing capabilities and support advanced analytics initiatives.
  • Develops automated deployment pipelines that improve the efficiency of code deployments with fit-for-purpose governance.
  • Performs complex data modeling in accordance with the datastore technology to ensure sustainable performance and accessibility.
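
As a concrete illustration of the streaming-pipeline responsibility above, here is a minimal PySpark structured-streaming sketch: it ingests JSON events from a Kafka topic, parses them against a schema, and lands them as Parquet in a data lake. The broker address, topic name, schema, and storage paths are hypothetical placeholders, not details from this posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

# Requires the spark-sql-kafka connector on the classpath.
spark = SparkSession.builder.appName("kafka-to-lake-sketch").getOrCreate()

# Hypothetical event schema, for illustration only.
event_schema = StructType([
    StructField("order_id", StringType()),
    StructField("status", StringType()),
    StructField("updated_at", TimestampType()),
])

# Ingest: read a stream of JSON events from a (placeholder) Kafka topic.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder address
    .option("subscribe", "orders")                     # placeholder topic
    .load()
    .select(from_json(col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

# Move: write micro-batches to the lake as Parquet, with checkpointing
# so the stream can resume exactly where it left off after a restart.
query = (
    events.writeStream
    .format("parquet")
    .option("path", "s3://example-lake/orders/")                     # placeholder
    .option("checkpointLocation", "s3://example-lake/_chk/orders/")  # placeholder
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()
```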

Requirements

  • Minimum of 4 years of relevant work experience.
  • Typically reflects 5 or more years of relevant experience.
  • Experience developing data systems on major cloud platforms (AWS, GCP, Azure).
  • Hands-on experience building modern data architectures, including data lakes, data lakehouses, and data hubs, along with related capabilities such as ingestion, governance, modeling, and observability.
  • Demonstrated proficiency with data collection and ingestion tools (Kafka, AWS Glue) and storage formats (Iceberg, Parquet).
  • Experience developing data pipelines with streaming architectures and tools (Kafka, Flink).
  • Expertise in data transformation and modeling using SQL-based frameworks and orchestration tools (dbt, AWS Glue, Airflow).
  • Deep experience with modeling concepts such as slowly changing dimensions (SCD) and schema evolution (see the sketch after this list).
  • Strong background using Spark for data transformation, including streaming, performance tuning, and debugging with the Spark UI.
  • Advanced programming skills in Python, Java, Scala, or similar languages.
  • Expert-level proficiency in SQL for data manipulation and optimization.
  • Demonstrated experience in DevOps practices, including code management, CI/CD, and deployment strategies.
  • Strong background in data governance principles, including data quality, privacy, and security considerations for data product development and consumption.
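
As a sketch of the SCD requirement above: a common pattern for a Type 2 slowly changing dimension is a two-step load that first closes out current rows whose tracked attributes changed, then inserts new current versions. The snippet below assumes a MERGE-capable table format such as Iceberg; every table and column name is hypothetical, and it illustrates the general pattern rather than any specific implementation at Cargill.

```python
from pyspark.sql import SparkSession

# Assumes a catalog with MERGE INTO support (e.g. an Iceberg catalog);
# all table and column names below are hypothetical.
spark = SparkSession.builder.appName("scd2-sketch").getOrCreate()

# Step 1: close out current dimension rows whose tracked attributes changed.
spark.sql("""
    MERGE INTO lake.dim_customer AS dim
    USING lake.stg_customer_updates AS stg
      ON dim.customer_id = stg.customer_id AND dim.is_current = true
    WHEN MATCHED AND dim.attr_hash <> stg.attr_hash THEN
      UPDATE SET is_current = false, valid_to = stg.loaded_at
""")

# Step 2: insert new current versions for changed keys (closed above)
# and for keys that have never been seen before.
spark.sql("""
    INSERT INTO lake.dim_customer
    SELECT stg.customer_id,
           stg.name,
           stg.attr_hash,
           stg.loaded_at            AS valid_from,
           CAST(NULL AS TIMESTAMP) AS valid_to,
           true                     AS is_current
    FROM lake.stg_customer_updates AS stg
    LEFT JOIN lake.dim_customer AS dim
      ON dim.customer_id = stg.customer_id AND dim.is_current = true
    WHERE dim.customer_id IS NULL
""")
```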
Benefits

  • N/A

Applicant Tracking System Keywords

Tip: use these terms in your resume and cover letter to boost ATS matches.

Hard skills
data systems, data analysis, data infrastructure, data pipelines, data modeling, data transformation, programming in Python, programming in Java, programming in Scala, SQL
Soft skills
collaboration, problem-solving, communication, organizational skills, analytical thinking