Building and maintaining production data pipelines for analytics across the business
Developing tooling and solutions for data practitioners across the company using a deep understanding of their objectives and problems
Improving observability and maintainability in our data platform, including uptime, usage, data quality, and data freshness
Improving processes throughout the data environment through automation
Creating, implementing, and improving standards for production-worthy data flows
Creating and maintaining documentation on processes, policies, application configuration, and help-related materials as applications are developed
Requirements
5+ years of taking a multidisciplinary approach to data operations: we emphasize picking the right tool for the job
5+ years of experience with cloud data technologies (Snowflake preferred) and data modeling best practices
3+ years of experience automating a data platform with scripting tools (e.g., Python (preferred), PowerShell, Bash)
3+ years of experience with advanced orchestration tools (e.g., Airflow (preferred), Prefect, Dagster)
3+ years of experience with data ingestion tools (e.g., Fivetran (preferred), Stitch, Airbyte, Portable.io) and reverse ETL tools (e.g., Census, Hightouch)
5+ years of experience developing, operating, and maintaining data pipelines using open-source dbt Core
5+ years of experience working in a cloud environment (AWS preferred), including technologies such as IAM, Lambda, RDS, AppFlow, Glue, EC2, and S3, among others
Benefits
Competitive compensation packages
Medical coverage
Unlimited PTO
Wellness reimbursements
Pluralsight subscription
Professional development funds, and more