Data Engineer

PlanetArt

Full-time

Location Type: Hybrid

Location: Calabasas • California • 🇺🇸 United States

Salary

💰 $92,000 - $100,000 per year

Job Level

Mid-Level • Senior

Tech Stack

Airflow • Amazon Redshift • Apache • AWS • Cloud • Distributed Systems • Kafka • MySQL • PySpark • Python • Spark • SQL • Tableau

About the role

  • Build data pipelines using extract, transform, and load (ETL) processes to consolidate data from disparate sources and make it accessible and usable (see the Airflow sketch after this list).
  • Optimize database systems for performance and integrate databases, warehouses, and analytical systems.
  • Implement data transformation algorithms and machine learning models that make data useful.
  • Partner with cross-functional teams (Data Science, Sales, Finance, Supply Chain, Marketing, Operations) to align forecasts with strategic and operational planning.
  • Identify key risks and opportunities within forecast assumptions and present recommendations to leadership.
  • Monitor deployed forecast model performance, track variances, and continuously refine methodologies (see the variance-tracking sketch after this list).
  • Troubleshoot data-related issues by identifying root causes and resolving common failures in data pipelines.
  • Design and deliver recurring reports, dashboards, and presentations for senior leadership.
  • Mentor and provide guidance to junior analysts on best practices in analytics.
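
To ground the pipeline responsibilities, here is a minimal sketch of the kind of ETL DAG described above, written with Airflow's TaskFlow API (Airflow 2.x). The DAG name, schedule, and sample rows are hypothetical; a real pipeline would extract from sources such as S3 or Kafka and load into Redshift.

```python
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def orders_etl():
    @task
    def extract() -> list[dict]:
        # Stand-in for reading raw files from S3 or consuming a Kafka topic.
        return [{"order_id": 1, "amount": "19.99"}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Normalize types so downstream consumers see consistent data.
        return [{**r, "amount": float(r["amount"])} for r in rows]

    @task
    def load(rows: list[dict]) -> None:
        # Stand-in for a COPY into Redshift or another warehouse write.
        print(f"loaded {len(rows)} rows")

    load(transform(extract()))


orders_etl()
```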
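
For the forecast-monitoring duties, a minimal variance-tracking sketch, assuming MAPE as the error metric; the figures and the 10% alert threshold are illustrative assumptions, not values from the posting.

```python
def mape(actuals: list[float], forecasts: list[float]) -> float:
    """Mean absolute percentage error between actuals and forecasts."""
    return sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals)


# Hypothetical weekly actuals vs. deployed-model forecasts.
actuals = [100.0, 120.0, 90.0]
forecasts = [110.0, 115.0, 100.0]

error = mape(actuals, forecasts)
print(f"MAPE: {error:.1%}")
if error > 0.10:  # the 10% threshold is an assumption for illustration
    print("variance above threshold; revisit the model or its inputs")
```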

Requirements

  • 3+ years of data engineering experience designing and developing large data pipelines using Apache Kafka, Apache Spark, and workflow management tools (Airflow) required.
  • 2+ years of experience with data warehousing and data lake management, including Amazon Redshift and AWS S3 data lakes, required.
  • 2+ years of experience working with cloud-based services such as AWS Glue, serverless computing with AWS Lambda, and Airflow on AWS required.
  • 3+ years of experience using analytic SQL and working with traditional relational databases and/or distributed systems (Amazon S3, Redshift, MySQL, SQL Server) required.
  • 3+ years of experience with programming languages (e.g., Python, PySpark, Spark) required (see the PySpark sketch after this list).
  • 2+ years of experience building data visualizations, dashboards, and reports with Tableau and Power BI required.
  • Strong understanding of data modeling principles, including dimensional modeling and data normalization.
  • Ability to independently perform performance optimization and troubleshooting for data pipelines and ML deployments.
  • Strong written and verbal communication and presentation skills.
  • Excellent conceptual and analytical reasoning competencies.
  • Comfortable working in a fast-paced and highly collaborative environment.
  • Familiarity with Agile Scrum principles and Atlassian software (Jira, Confluence).
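
As a sketch of the PySpark and analytic SQL experience the requirements call for, the snippet below registers a small DataFrame as a temp view and runs a window function over it; the dataset and column names are hypothetical.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sales-rollup").getOrCreate()

orders = spark.createDataFrame(
    [("2024-01-01", "west", 120.0),
     ("2024-01-02", "west", 80.0),
     ("2024-01-01", "east", 200.0)],
    ["order_date", "region", "amount"],
)
orders.createOrReplaceTempView("orders")

# Analytic (window) SQL: a running total of sales per region.
rollup = spark.sql("""
    SELECT region,
           order_date,
           SUM(amount) OVER (PARTITION BY region ORDER BY order_date) AS running_total
    FROM orders
""")
rollup.show()
```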

Benefits

  • Health, Dental, and Vision Insurance
  • Life Insurance
  • Mental Health Benefits
  • Pet Insurance
  • 401(k) with matching

Applicant Tracking System Keywords

Tip: use these terms in your resume and cover letter to boost ATS matches.

Hard skills
data engineering • data pipelines • Apache Kafka • Apache Spark • Airflow • data warehousing • Amazon Redshift • AWS S3 • analytic SQL • Python
Soft skills
communication skills • analytical reasoning • mentoring • collaboration • troubleshooting • performance optimization • presentation skills • independent work • problem-solving • strategic planning