Cordial

Data Scientist – Production Engineering

Posted 4/20/2026 · Full-time · Remote (🇺🇸 United States) · Mid-Level to Senior · 💰 $140,000 – $175,000 per year

Tech Stack

Tools & technologies
Airflow · AWS · BigQuery · Cloud · Python

About the role

Key responsibilities & impact
  • Optimize existing data science models and systems for performance, scalability, and reliability
  • Translate research-grade or prototype data science code into production-ready implementations
  • Work with large datasets and improve efficiency related to memory usage, runtime, and compute cost
  • Contribute to and maintain production data pipelines and workflows
  • Collaborate closely with other data scientists to preserve model intent, correctness, and assumptions while improving implementation quality
  • Debug and resolve issues in production or near-production data science workflows
  • Improve robustness, monitoring, and maintainability of deployed models and pipelines
  • Support iterative model improvements and system evolution as business needs change

Requirements

What you’ll need
  • Bachelor’s degree or higher in Data Science, Computer Science, Statistics, Mathematics, or a related quantitative field, plus 3+ years of experience working with real-world, industry, or production data in a data science, applied ML, or analytics role
  • Demonstrated experience contributing to production data science or analytics systems, not only exploratory or academic projects
  • Strong programming skills in Python and experience writing maintainable, production-quality code
  • Experience working with large datasets and performance-sensitive workflows
  • Prior experience with data pipelines and orchestration frameworks (e.g., Dagster, Airflow)
  • Cloud platform expertise, particularly AWS services (e.g., Glue, Athena, ECS, S3 Tables) for scalable data processing and model deployment
  • Hands-on experience with modern data warehouse solutions (Snowflake, BigQuery, etc.) including query optimization, clustering strategies, and cost management
  • Experience with big data technologies and distributed computing frameworks for handling enterprise-scale event datasets
  • Solid understanding of data science fundamentals, including statistics and modeling concepts, sufficient to work closely with research-oriented data scientists
  • Ability to work independently and ramp up quickly in an existing codebase and system
  • Experience working in small, fast-moving teams where ownership and autonomy are expected

Benefits

Comp & perks
  • Equity and bonus
  • Robust benefit plan (medical/dental/vision/life)
  • 401k match
  • Flexible time off
  • Paid company holidays
  • Annual company conference
  • Childcare reimbursement
  • Continued education yearly reimbursements

ATS Keywords

Tip: use these terms in your resume and cover letter to boost ATS matches.

Hard Skills & Tools
data science, production data science, applied ML, Python, data pipelines, orchestration frameworks, AWS, data warehouse solutions, big data technologies, distributed computing
Soft Skills
collaboration, debugging, problem-solving, independence, adaptability, ownership, autonomy