Own data pipelines: Design, build, and maintain reliable ETL/ELT pipelines to ensure the right data gets to the right place, quickly and accurately.
Build the data foundation: Architect resilient, maintainable dbt models and schemas that serve as the trusted source of truth across the business.
Shape the modern data stack: Help evolve our tools, infrastructure, and architecture to meet the needs of a rapidly scaling AI startup.
Bridge science and business: Partner with leadership, growth, and GTM teams to proactively build datasets and models that unlock experiments, insights, and product features.
Improve quality and discoverability: Establish best practices for data validation, governance, documentation, and metric definitions to drive consistency and trust.
Enable self-service: Create intuitive data products and tools that empower non-technical teams to answer questions and make decisions independently.
Push the frontier: Experiment with emerging AI/ML and data tools, integrating them into our workflows and shaping the future of data at Tavus.
Requirements
5+ years of experience in analytics engineering, data engineering, or adjacent roles
Expertise in SQL, Python, and cloud infrastructure (AWS preferred)
Hands-on experience with Metabase, Orb, Salesforce, Snowflake, and ETL tools
Strong statistical and data science skills (experimentation, modeling, forecasting, ML workflows)
A builder’s mindset: you care about scalability, reusability, and speed of iteration
Intellectual curiosity and resourcefulness: you love digging into problems and finding elegant, pragmatic solutions
A track record of bias for action: you don’t just analyze, you drive change and impact
Excitement about working in a fast-paced, research-driven environment where the boundaries of human–machine interaction are being redrawn
Bonus points if you’re using AI in your current workflows
Strongly preferred: experience working with Salesforce (CRM and revenue data), Orb (usage & billing), and Stripe (payments & financial data)