help build data integration pipelines that feed into our data lakes and warehouses
maintain data quality and integrity in our data stores
collaborate effectively while also being able to work independently
work closely with key stakeholders to understand and implement business requirements
develop Looker Views and Models to democratize access to data products
keep up with new technologies and approaches to solve problems and improve existing systems
Requirements
Bachelor's degree in a quantitative field (such as computer science, statistics, mathematics, engineering, or data science) preferred, or equivalent experience
5+ years of experience in building and optimizing data pipelines with Python
experience writing complex SQL queries to analyze data
experience with at least one cloud service platform (GCP or AWS preferred)
familiarity with data streaming architectures using technologies like Pub/Sub and Apache Kafka
Benefits
medical, dental, vision, and prescription drug coverage
unlimited paid time off (PTO)
adoption or surrogate assistance
donation matching
tuition reimbursement
basic life insurance
basic accidental death & dismemberment
supplemental life insurance
supplemental accident insurance
commuter benefits
short-term and long-term disability
health savings and flexible spending accounts
family care benefits
a generous 401(k) savings plan with a company match program
10-12 paid holidays annually
generous paid parental leave (birthing and non-birthing parents)
voluntary benefits such as pet insurance; accident, critical illness, and hospital indemnity coverage; and life and disability insurance
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard skills
data integration, data pipelines, data quality, data integrity, Python, SQL, cloud service platform, GCP, AWS, data streaming architectures