Customer benchmarking & coordination: Be the technical point of contact during trials, running both standardized and custom benchmarks to prove our value
Performance testing: Design, run, and analyze benchmarks across customer workloads, including comparisons against AWS, Lambda, and other providers
Debug and optimize customer trials: Diagnose performance issues (GPU utilization, NCCL setup, container configs) and recommend fixes
Reporting & documentation: Package results into clear, credible reports and handoffs that make technical findings easy to act on
Maintain benchmarking infrastructure: Own and maintain the scripts, containers, and environments used to validate performance across SKUs and setups
Continuous iteration: Identify performance gaps, optimize cluster configs, and work with supply and engineering to close the loop
Requirements
Experience running infra performance tests or ML model benchmarks (training, inference, or both)
Strong knowledge of GPU cloud infra — how workloads run, what bottlenecks to watch for, and how configs affect performance
Clear and fast written communication (reports, docs, handoffs)
Ability to juggle multiple trials/projects at once
Familiarity with the GPU cloud landscape (AWS, Lambda, CoreWeave, Runpod, etc.)
Bonus: Prior customer-facing experience in a startup or devtools setting
Bonus: Background as an ML engineer, solutions architect, or technical account manager