Develop and maintain end-to-end data pipelines and backend ingestion, and participate in building Samsara’s Data Platform to enable advanced automation and analytics.
Work with data from a variety of sources, including but not limited to CRM, product, marketing, order flow, and support ticket volume data.
Manage critical data pipelines to enable our growth initiatives and advanced analytics.
Facilitate data integration and transformation requirements for moving data between applications, ensuring that applications interoperate with the data layers and the data lake.
Develop and improve the current data architecture, with a focus on data quality, monitoring, observability, and data availability.
Write data transformations in SQL/Python to generate data products consumed by customer systems and by the Analytics, Marketing Operations, and Sales Operations teams.
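For a flavor of this kind of work, here is a minimal sketch of a SQL transformation driven from Python. It uses the standard-library sqlite3 module so it runs anywhere; in practice the same logic would target a warehouse such as Redshift or Snowflake, and the raw_orders and daily_order_volume tables and their columns are hypothetical.

```python
# Minimal sketch: aggregate raw order rows into a daily "data product"
# table that downstream Analytics / Sales Operations dashboards could read.
# Uses sqlite3 (standard library) purely for illustration; table and column
# names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE raw_orders (order_id INTEGER, order_date TEXT, amount REAL);
    INSERT INTO raw_orders VALUES
        (1, '2024-01-01', 120.0),
        (2, '2024-01-01',  80.0),
        (3, '2024-01-02', 200.0);
""")

# The transformation itself: one SQL statement materializing the daily view.
conn.execute("""
    CREATE TABLE daily_order_volume AS
    SELECT order_date,
           COUNT(*)    AS order_count,
           SUM(amount) AS total_amount
    FROM raw_orders
    GROUP BY order_date
""")

for row in conn.execute("SELECT * FROM daily_order_volume ORDER BY order_date"):
    print(row)  # ('2024-01-01', 2, 200.0) then ('2024-01-02', 1, 200.0)
```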
Champion, role model, and embed Samsara’s cultural principles (Focus on Customer Success, Build for the Long Term, Adopt a Growth Mindset, Be Inclusive, Win as a Team) as we scale globally and across new offices.
Requirements
A Bachelor’s degree in computer science, data engineering, data science, information technology, or an equivalent engineering program.
8+ years of work experience as a Data Engineer.
3+ years of experience building and maintaining large-scale, production-grade, end-to-end data pipelines, including data modeling.
Experience with modern cloud-based data-lake and data-warehousing technology stacks, and familiarity with typical data-engineering tools, ETL/ELT, and data-warehousing processes and best practices.
Experience leading end-to-end projects, including acting as the central point of contact for stakeholders.
Mentor junior team members and provide technical guidance, training, and knowledge sharing across teams.
Engage directly with internal cross-functional stakeholders to understand their data needs and design scalable solutions.
Experience with the following:
3+ years in Python and SQL.
ETL: Exposure to tools such as Fivetran, dbt, or equivalent.
API: Exposure to Python-based API frameworks for data pipelines (see the sketch after this list).
RDBMS: MySQL, AWS RDS/Aurora MySQL, PostgreSQL, Oracle, MS SQL Server, or equivalent.
Cloud: AWS, Azure, and/or GCP.
Data warehouse: Databricks, Google BigQuery, AWS Redshift, Snowflake, or equivalent.
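As referenced above, here is a minimal sketch of a Python-based API ingestion step using only the standard library. The endpoint URL, the "records" response field, and the NDJSON staging path are hypothetical placeholders; a production pipeline would add authentication, pagination, retries, and schema validation.

```python
# Minimal sketch: fetch JSON records from a REST endpoint and stage them as
# newline-delimited JSON for a warehouse bulk loader (e.g. a COPY command).
# The URL and field names below are hypothetical placeholders.
import json
import urllib.request

API_URL = "https://api.example.com/v1/support_tickets"  # hypothetical endpoint

def fetch_page(url: str) -> list[dict]:
    """Fetch one page of records; a real pipeline adds auth, paging, retries."""
    with urllib.request.urlopen(url, timeout=30) as resp:
        return json.load(resp)["records"]  # assumes a {"records": [...]} payload

def stage_records(records: list[dict], path: str) -> None:
    """Write records as NDJSON, one object per line, ready for a bulk load."""
    with open(path, "w", encoding="utf-8") as f:
        for rec in records:
            f.write(json.dumps(rec) + "\n")

if __name__ == "__main__":
    stage_records(fetch_page(API_URL), "support_tickets.ndjson")
```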
Benefits
Full-time employees receive a competitive total compensation package along with employee-led remote and flexible working, health benefits, and much, much more.
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard skills
data engineering, data pipelines, data modeling, SQL, Python, ETL, data warehousing, data integration, data transformation, data architecture