Acxiom

Intern, Data Engineer

Internship

Location Type: Remote

Location: United States

About the role

  • Assist in building and maintaining batch and streaming data pipelines using tools such as Spark, Databricks, Snowflake, and cloud-native services.
  • Support the development of ETL/ELT workflows using orchestration tools like Apache Airflow, dbt, or managed cloud schedulers.
  • Help ingest structured and semi-structured data from sources such as S3, ADLS, GCS, APIs, or Kafka into raw and curated data layers.
  • Write and maintain SQL and Python-based transformations for cleaning, joining, and aggregating datasets.
  • Participate in implementing data quality checks, validation rules, and basic monitoring to ensure data accuracy and reliability.
  • Collaborate with data engineers, analysts, and data scientists to understand how datasets are consumed by analytics models and AI agents.
  • Assist in preparing datasets and feature tables that can be used by AI/ML pipelines or autonomous agents for decision-making and automation.
  • Explore how AI agents can interact with data platforms (e.g., querying data, triggering pipelines, summarizing results) under guidance from senior team members.
  • Contribute to documentation of data flows, schemas, and pipeline logic to support team knowledge sharing.
  • Learn and follow data modeling, governance, and privacy best practices, especially in regulated or privacy-conscious environments.
  • Support version control and deployment processes using Git and basic CI/CD workflows.
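To illustrate the kind of "cleaning, joining, and aggregating" and data-quality work described above, here is a minimal, hypothetical sketch in pure Python. It does not use the tools named in the posting (Spark, dbt, Airflow), and the function names (`clean_records`, `total_by_user`) are invented for this example:

```python
# Hypothetical sketch: clean raw records, validate them, and aggregate.
# Function and field names are illustrative, not from the posting.

from collections import defaultdict

def clean_records(raw_rows):
    """Drop rows missing required fields and normalize types."""
    cleaned = []
    for row in raw_rows:
        if not row.get("user_id") or row.get("amount") is None:
            continue  # basic validation rule: required fields must be present
        cleaned.append({
            "user_id": str(row["user_id"]).strip(),
            "amount": float(row["amount"]),
        })
    return cleaned

def total_by_user(rows):
    """Aggregate spend per user (a simple GROUP BY in SQL terms)."""
    totals = defaultdict(float)
    for row in rows:
        totals[row["user_id"]] += row["amount"]
    return dict(totals)

raw = [
    {"user_id": " u1 ", "amount": "10.5"},
    {"user_id": "u2", "amount": 4},
    {"user_id": None, "amount": 99},      # fails validation, dropped
    {"user_id": "u1", "amount": "2.0"},
]
print(total_by_user(clean_records(raw)))  # {'u1': 12.5, 'u2': 4.0}
```

In a production pipeline this logic would typically live in a SQL or dbt model, or a Spark job, with the validation step surfaced through monitoring rather than silent row drops.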

Requirements

  • Currently pursuing a Bachelor’s or Master’s degree in Computer Science, Data Science, Engineering, Information Systems, or a related field.
  • Basic proficiency in SQL, including simple joins, aggregations, and filtering.
  • Familiarity with Python for scripting, data manipulation, or coursework projects.
  • Introductory understanding of data engineering concepts, such as ETL/ELT, data lakes, and data warehouses.
  • Exposure to at least one cloud platform (AWS, Azure, or GCP) through coursework, labs, or personal projects.
  • Interest in AI, machine learning, or intelligent systems, especially how they depend on high-quality data.
  • Strong willingness to learn, ask questions, and collaborate in a team environment.
  • Clear written and verbal communication skills with attention to detail.

Benefits

  • The internship begins June 15th, 2026.
  • Expect 20-25 hours/week during the semester and up to 40 hours/week during breaks and summer.

Applicant Tracking System Keywords

Tip: use these terms in your resume and cover letter to boost ATS matches.

Hard Skills & Tools
SQL, Python, ETL, ELT, data pipelines, data quality checks, data modeling, data governance, data privacy, CI/CD
Soft Skills
willingness to learn, collaboration, communication, attention to detail