Jellyvision

Data Engineer

full-time

Location Type: Hybrid

Location: Chicago; California; Colorado; United States

Salary

💰 $95,000 - $110,000 per year

About the role

  • Build and maintain data pipelines and storage
  • Build, test, and deploy Airflow DAGs in MWAA following team standards
  • Write clean, production-ready Parquet files using PyArrow or Spark with proper compression and partitioning (a minimal sketch of both follows this list)
  • Implement ETL/ELT pipelines from source → landing → transformed layers
  • Help maintain S3 storage structures following existing partitioning, lifecycle, and access policies
  • Assist with dimensional modeling—turn normalized data into star-schema facts and dimensions under senior guidance
  • Implement basic data quality checks using Python
  • Troubleshoot failing DAGs, rerun backfills, and respond to alerts
  • Participate in on-call support rotation
  • Update and maintain pipeline documentation
  • Learn, collaborate, and help grow our data platform
  • Participate in code reviews to learn best practices and improve code quality
  • Partner with Senior Data Engineers and Analytics Engineers on architecture alignment
  • Work with analytics and product teams to understand and deliver data requirements
  • Document troubleshooting procedures and platform patterns
  • Contribute to sprint planning and technical discussions
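
To give a concrete flavor of the day-to-day work described above, here is a minimal sketch of an Airflow TaskFlow DAG that lands a tiny dataset as snappy-compressed, date-partitioned Parquet with PyArrow and runs a basic row-count check. The DAG id, columns, bucket paths, and check are illustrative assumptions only, not taken from Jellyvision's platform.

```python
# Minimal sketch only: the DAG id, columns, and paths below are hypothetical,
# not taken from the job posting.
import pendulum
import pyarrow as pa
import pyarrow.parquet as pq
from airflow.decorators import dag, task


@dag(
    schedule="@daily",
    start_date=pendulum.datetime(2024, 1, 1, tz="UTC"),
    catchup=False,
    tags=["example"],
)
def landing_to_transformed():
    @task
    def extract():
        # In a real pipeline this would pull from a source system.
        return [
            {"order_id": 1, "amount": 42.0, "ds": "2024-01-01"},
            {"order_id": 2, "amount": 17.5, "ds": "2024-01-01"},
        ]

    @task
    def write_parquet(rows):
        # Write snappy-compressed Parquet, partitioned by the "ds" column.
        table = pa.Table.from_pylist(rows)
        root = "/tmp/landing/orders"  # stand-in for an S3 landing prefix
        pq.write_to_dataset(
            table,
            root_path=root,
            partition_cols=["ds"],
            compression="snappy",
        )
        return root

    @task
    def row_count_check(root):
        # Basic data quality check: fail the task if the dataset is empty.
        if pq.read_table(root).num_rows == 0:
            raise ValueError(f"No rows written under {root}")

    row_count_check(write_parquet(extract()))


landing_to_transformed()
```

In MWAA, a file like this would be deployed to the environment's DAGs folder in S3; the local /tmp path here simply stands in for a real landing prefix.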

Requirements

  • 2+ years of practical experience (professional work, internships, and bootcamps all count)
  • Solid Python and working SQL proficiency
  • Familiarity with cloud platforms (AWS/GCP/Azure) and modern data tooling (e.g., Airflow, dbt, Spark, Snowflake, Databricks, Redshift)
  • Understanding of data modeling concepts (e.g., star schema, normalization) and ETL/ELT design practices
  • Experience reading/writing Parquet
  • Ability to write and run basic Airflow DAGs
  • Docker fundamentals
  • Git fundamentals, agile development, and CI/CD practices
  • Demonstrated curiosity
  • Genuine tinkerer energy: you’ve built personal data projects for fun, tried random tools (Polars, DuckDB, Ollama, local LLMs, MotherDuck, etc.), and probably have a messy but awesome docker-compose.yml on your machine.

Nice to Have:
  • Experience with AI coding assistants
  • Exposure to AI tools (MCP servers, Ollama, local LLMs)
  • Knowledge of Delta Lake or Snowflake clustering
  • Basic Terraform experience
  • Simple data-quality tooling (see the sketch after this list)
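
As a rough illustration of the "simple data-quality tooling" and tinkerer tools mentioned above, here is a small Python sketch that uses DuckDB to run row-count, null, and uniqueness checks against a Parquet file. The file path and column names are hypothetical assumptions, not requirements from the posting.

```python
# Minimal sketch only: the file path and column names below are hypothetical.
import duckdb


def check_orders_parquet(path="data/orders.parquet"):
    con = duckdb.connect()

    # Row-count check: the extract should never be empty.
    n_rows = con.sql(f"SELECT COUNT(*) FROM read_parquet('{path}')").fetchone()[0]
    if n_rows == 0:
        raise ValueError(f"{path}: no rows found")

    # Null check on a key column.
    n_null_ids = con.sql(
        f"SELECT COUNT(*) FROM read_parquet('{path}') WHERE order_id IS NULL"
    ).fetchone()[0]
    if n_null_ids > 0:
        raise ValueError(f"{path}: {n_null_ids} rows with NULL order_id")

    # Uniqueness check on the primary key.
    n_dupes = con.sql(
        f"""
        SELECT COUNT(*) FROM (
            SELECT order_id FROM read_parquet('{path}')
            GROUP BY order_id HAVING COUNT(*) > 1
        )
        """
    ).fetchone()[0]
    if n_dupes > 0:
        raise ValueError(f"{path}: {n_dupes} duplicate order_id values")

    print(f"{path}: {n_rows} rows passed basic quality checks")


if __name__ == "__main__":
    check_orders_parquet()
```
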
Benefits

  • Check out our benefits here!

Applicant Tracking System Keywords

Tip: use these terms in your resume and cover letter to boost ATS matches.

Hard skills
Python, SQL, Airflow, ETL, ELT, Parquet, Docker, Git, data modeling, data quality checks

Soft skills
curiosity, collaboration, troubleshooting, communication, participation in code reviews, sprint planning, technical discussions