About the role
- Fine-tune state-of-the-art models and design evaluation frameworks
- Bring AI features into production and ensure models are safe, trustworthy, and impactful at scale
- Run and manage open-source models efficiently, optimizing for cost and reliability
- Ensure high performance and stability across GPU, CPU, and memory resources
- Monitor and troubleshoot model inference to maintain low latency and high throughput
- Collaborate with regional teams across product, engineering, operations, infrastructure, and data to build and scale AI solutions
- Collaborate with engineers to implement scalable and reliable model serving solutions
- Prototype, test, and iterate to deliver impactful AI applications
Requirements
- Experience with model serving platforms such as vLLM or Hugging Face TGI
- Proficiency in GPU orchestration using tools such as Kubernetes, Ray, Modal, RunPod, or Lambda Labs
- Ability to monitor latency, costs, and scale systems efficiently with traffic demands
- Experience setting up inference endpoints for backend engineers
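To give a concrete sense of the latency-monitoring work mentioned above, here is a minimal sketch of computing latency percentiles from request timings; the function name and sample values are illustrative, not part of any specific stack named in this posting.

```python
import statistics


def latency_percentiles(samples_ms, percentiles=(50, 95, 99)):
    """Compute nearest-rank latency percentiles (in ms) from timing samples."""
    ordered = sorted(samples_ms)
    n = len(ordered)
    result = {}
    for p in percentiles:
        # Nearest-rank method: index of the p-th percentile in the sorted list
        idx = max(0, min(n - 1, round(p / 100 * n) - 1))
        result[f"p{p}"] = ordered[idx]
    return result


# Example: ten request latencies in milliseconds
samples = [120, 95, 110, 300, 105, 98, 102, 250, 115, 100]
print(latency_percentiles(samples))
```

In production, the same percentiles would typically come from a metrics system (e.g. Prometheus histograms) rather than in-process computation, but the p50/p95/p99 framing is the standard way to track the low-latency, high-throughput goals described in this role.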
What we offer
- Flat structure & real ownership
- Full involvement in setting direction and consensus-based decision making
- Flexibility in work arrangement
- High-impact role with visibility across product, data, and engineering
- Top-of-market compensation and performance-based bonuses
- Global exposure to product development
- Lots of perks: housing rental subsidies, a quality company cafeteria, and overtime meals
- Health, dental & vision insurance
- Global travel insurance (for you & your dependents)
- Unlimited, flexible time off
ATS Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard skills
model fine-tuning, model evaluation frameworks, AI feature production, model optimization, model inference monitoring, latency troubleshooting, model serving, prototype testing, scalable AI applications, inference endpoint setup
Soft skills
collaboration, communication