About the role
- Fine-tune state-of-the-art models and design evaluation frameworks
- Bring AI features into production and ensure models are safe, trustworthy, and impactful at scale
- Run and manage open-source models efficiently, optimizing for cost and reliability
- Ensure high performance and stability across GPU, CPU, and memory resources
- Monitor and troubleshoot model inference to maintain low latency and high throughput
- Collaborate with product, engineering, operations, infrastructure, and data teams to build and scale AI solutions
- Implement scalable and reliable model serving and inference endpoints for backend engineers
Requirements
- Experience with model serving platforms such as vLLM or Hugging Face TGI
- Proficiency in GPU orchestration using tools such as Kubernetes, Ray, Modal, RunPod, or LambdaLabs
- Ability to monitor latency and costs, and to scale systems efficiently with traffic demands
- Experience setting up inference endpoints for backend engineers
- Experience optimizing for cost and reliability when running open-source models
Benefits
- Flat structure & real ownership
- Full involvement in setting direction and in consensus decision-making
- Flexible work arrangements
- High-impact role with visibility across product, data, and engineering
- Top-of-market compensation and performance-based bonuses
- Exposure to global product development
- Lots of perks: housing rental subsidies, a quality company cafeteria, and overtime meals
- Health, dental & vision insurance
- Global travel insurance (for you & your dependents)
- Unlimited, flexible time off
ATS Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard skills
model fine-tuning, model evaluation frameworks, model serving, GPU orchestration, inference endpoints, cost optimization, reliability optimization, latency monitoring, high throughput, open-source models
Soft skills
collaboration, troubleshooting, communication