About the role
- Run and manage open-source models efficiently, optimizing for cost and reliability
- Ensure high performance and stability across GPU, CPU, and memory resources
- Monitor and troubleshoot model inference to maintain low latency and high throughput (see the sketch after this list)
- Collaborate with engineers to implement scalable and reliable model serving solutions
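For flavor, a minimal sketch of one way inference latency could be tracked, using prometheus_client (the metric name, port, and fake_model_call stand-in are assumptions for this example, not specifics of the role):

```python
# A minimal sketch of tracking per-request inference latency
# (metric name, port, and fake_model_call are illustrative).
import time

from prometheus_client import Histogram, start_http_server

INFERENCE_LATENCY = Histogram(
    "inference_latency_seconds",
    "Time spent per model inference call",
)

def fake_model_call(prompt: str) -> str:
    time.sleep(0.05)  # stand-in for a real forward pass
    return "response"

if __name__ == "__main__":
    start_http_server(9100)  # expose /metrics for Prometheus to scrape
    with INFERENCE_LATENCY.time():  # records call duration into the histogram
        fake_model_call("hello")
```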
Requirements
- Experience with model serving platforms such as vLLM or HuggingFace TGI (see the sketch after this list)
- Proficiency in GPU orchestration using tools like Kubernetes, Ray, Modal, RunPod, or LambdaLabs
- Ability to monitor latency and cost, and to scale systems efficiently with traffic demand
- Experience setting up inference endpoints for backend engineers
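For illustration, a minimal sketch of offline batch inference with vLLM, one of the platforms named above (the model name and sampling settings are assumptions for the example):

```python
# A minimal sketch of batched offline inference with vLLM
# (model choice and sampling settings are illustrative).
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")  # any open model on the HF Hub
params = SamplingParams(temperature=0.7, max_tokens=128)

outputs = llm.generate(["What does a model serving engineer do?"], params)
for out in outputs:
    print(out.outputs[0].text)
```

In production, the same model would more likely sit behind vLLM's OpenAI-compatible server (vllm serve <model>), which gives backend engineers a standard HTTP endpoint to call.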
Benefits
- Flat structure & real ownership
- Full involvement in setting direction and in consensus decision-making
- Flexible work arrangements
- High-impact role with visibility across product, data, and engineering
- Top-of-market compensation and performance-based bonuses
- Global exposure to product development
- Lots of perks: housing rental subsidies, a quality company cafeteria, and overtime meals
- Health, dental & vision insurance
- Global travel insurance (for you & your dependents)
- Unlimited, flexible time off
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard skills
model serving, GPU orchestration, inference endpoints, cost optimization, latency monitoring, high throughput, scalability, reliability
Soft skills
collaboration, troubleshooting