Conagra Brands

Senior Data Engineer

Full-time

Location Type: Office

Location: Omaha; Illinois; New York (United States)


Salary

💰 $93,200 - $135,700 per year

About the role

  • Lead the design, development, and maintenance of scalable data pipelines that integrate data from multiple sources into the Enterprise Data Platform.
  • Build pipelines and workflows as code using modern engineering practices including version control, code reviews, automated testing, and reusable components.
  • Define and implement patterns for continuous integration and continuous deployment for data pipelines including automated builds, tests, deployments, and environment promotion.
  • Partner with data scientists, analysts, and business teams to gather requirements and translate them into robust data solutions.
  • Build and optimize SQL queries and data transformations that support complex business use cases and analytics needs.
  • Design and manage data models and review them with business collaborators, data architects, and governance partners.
  • Establish data quality checks, validation, and troubleshooting practices to maintain accuracy, consistency, and trust in data products.
  • Monitor and optimize pipeline performance and reliability; implement observability including logging, metrics, and alerts; contribute to operational runbooks.
  • Drive automation to increase efficiency and reduce manual processes across platform operations.
  • Provide technical leadership through mentoring, peer reviews, and guidance on engineering standards.
  • Participate in Agile ceremonies to plan, estimate, and deliver work efficiently.
  • Create and maintain documentation for data workflows, transformations, standards, and operational procedures.
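
The data quality duties above can be sketched as a minimal validation step. This is an illustrative example only, not Conagra's actual tooling; the column names, rules, and sample rows are hypothetical:

```python
# Minimal sketch of a data quality check: completeness (no missing values
# in required columns) and uniqueness (no duplicate keys).
def validate_rows(rows, required_columns, key_column):
    """Return a list of data quality issues found in `rows`."""
    issues = []
    seen_keys = set()
    for i, row in enumerate(rows):
        # Completeness: every required column must be present and non-null.
        for col in required_columns:
            if row.get(col) is None:
                issues.append(f"row {i}: missing value for '{col}'")
        # Uniqueness: the key column must not repeat across rows.
        key = row.get(key_column)
        if key in seen_keys:
            issues.append(f"row {i}: duplicate key {key!r}")
        seen_keys.add(key)
    return issues

rows = [
    {"order_id": 1, "amount": 10.0},
    {"order_id": 1, "amount": None},  # duplicate key and missing amount
]
print(validate_rows(rows, ["order_id", "amount"], "order_id"))
```

In a production pipeline these checks would typically run as a gated step before data is promoted between environments, with failures surfaced through logging and alerts rather than a simple print.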

Requirements

  • Bachelor’s degree in Computer Science, Information Systems, or a related field, or equivalent experience.
  • Five to eight years of experience in data engineering or a related field.
  • Advanced proficiency in SQL for complex data transformation and analysis.
  • Hands-on experience with cloud-based data platforms such as Databricks or Snowflake.
  • Experience with ETL and ELT tools or frameworks such as Informatica, Talend, or dbt.
  • Strong proficiency in Python or PySpark for data processing and pipeline development.
  • Strong understanding of data modeling, database design principles, and building curated datasets for analytics and operational use cases.
  • Experience with DevOps practices including Git-based development, branching strategies, and code reviews.
  • Experience implementing continuous integration and continuous deployment for data pipelines and managing deployments across environments.
  • Familiarity with orchestration and workflow tools such as Databricks Workflows or Airflow is preferred.
  • Familiarity with Infrastructure as Code tools including Terraform or CloudFormation and containerization concepts is a plus.
  • Strong problem-solving skills, attention to detail, and the ability to troubleshoot complex issues.
  • Strong communication skills and the ability to collaborate across technical and non-technical teams.

Benefits

  • Comprehensive healthcare plans
  • Wellness incentive program
  • Mental wellbeing support and fitness reimbursement
  • Bonus incentive opportunity
  • Matching 401(k) and stock purchase plan
  • Career development opportunities
  • Employee resource groups
  • On-demand learning and tuition reimbursement
  • Paid time off
  • Parental leave
  • Flexible work schedules
  • Volunteer opportunities

Applicant Tracking System Keywords

Tip: use these terms in your resume and cover letter to boost ATS matches.

Hard Skills & Tools
SQL (Structured Query Language), Python, PySpark, ETL, ELT, data modeling, database design, continuous integration, continuous deployment, data transformation
Soft Skills
problem-solving, attention to detail, communication, collaboration, technical leadership, mentoring