Data Scientist

Tarro

full-time

Location: 🇺🇸 United States • California

Job Level

Mid-Level • Senior

Tech Stack

Airflow • Amazon Redshift • Cloud • Docker • ETL • Python • SQL

About the role

  • Drive impactful insights and build data-driven solutions across multiple Tarro product areas
  • Analyze complex datasets and develop predictive models
  • Own end-to-end data science model pipelines, from development to support, and deploy models into production
  • Make key decisions on technologies and tools as the team scales
  • Write high-quality code for all parts of data science pipelines
  • Collaborate closely with Product, Engineering, Operations, Marketing, Sales, Customer Success, and other stakeholders to define and execute on diagnostic, predictive, prescriptive, and experimentation requirements
  • Create scalable data science models following software and data engineering practices such as versioning, CI/CD, workflow orchestration, DataOps, and MLOps
  • Implement rigorous code reviews and testing guidelines to ensure high-quality data science products
  • Create and improve the data science CI/CD pipeline to enable high developer velocity
  • Help improve the life of independent restaurant owners and their customers

Requirements

  • Bachelor's degree in Computer Science, Engineering, Statistics, or equivalent experience
  • 3+ years of ELT/ETL, data exploration, transformation, and building production models
  • 3+ years working with cloud databases, such as Snowflake, Redshift or similar
  • Practical experience with statistical testing (p/t/z tests, power analysis), time series, regression/classification, clustering, and NLP
  • Experience with programming languages such as Python, used for data processing and making API calls, along with advanced SQL
  • Experience building with at least one of OpenAI/Anthropic/Google/OSS (Llama/Mistral), using embeddings, RAG (vector DB + hybrid search + re-ranking), prompt engineering, function/tool calling, JSON-schema outputs, and evaluation frameworks (e.g., prompt/unit tests, hallucination checks)
  • Exposure to MLflow or equivalent for experiment tracking/model registry; data & model versioning; orchestration (Dagster/Airflow); containerization (Docker); and monitoring (latency, cost, quality)
  • Exceptional product sense, communication, and a bias to ship
  • Bonus: Experience with dbt, Dagster, MLflow/DVC, Weights & Biases, or Feast/feature stores
  • Bonus: Restaurant tech or marketplace/logistics experience; customer-obsessed and love turning noisy operational data into outcomes