Salary
💰 $148,000 - $287,500 per year
About the role
- Optimize generative AI models such as large language models (LLMs) and diffusion models for maximum inference efficiency, applying techniques including quantization, speculative decoding, sparsity, distillation, pruning, and neural architecture search, together with streamlined deployment strategies built on open-source inference frameworks; a minimal quantization sketch follows this list.
- Design, implement, and productionize model optimization algorithms for inference and deployment on NVIDIA’s latest hardware platforms.
- Focus on ease of use, compute and memory efficiency, and achieving the best accuracy–performance tradeoffs through software–hardware co-design.
- Work across multiple layers of the AI software stack—ranging from algorithm design to integration—within NVIDIA’s ecosystem (TensorRT Model Optimizer, NeMo/Megatron, TensorRT-LLM) and open-source frameworks (PyTorch, Hugging Face, vLLM, SGLang).
- Dive into GPU-level optimization, including custom kernel development with CUDA and Triton, and conduct deep kernel-level profiling to identify optimization opportunities (e.g., efficient attention kernels, KV cache optimization, parallelism strategies); a minimal Triton kernel sketch follows this list.
- Deploy optimized models into leading OSS inference frameworks and contribute specialized APIs, model-level optimizations, and new features tailored to the latest NVIDIA hardware capabilities.
- Partner with NVIDIA teams to deliver model optimization solutions for customer use cases, ensuring optimal end-to-end workflows and balanced accuracy–performance tradeoffs.
- Drive continuous innovation in deep learning inference performance to strengthen NVIDIA platform integration and expand market adoption across the AI inference ecosystem.
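As a concrete illustration of one technique named above, the snippet below is a minimal sketch of post-training dynamic quantization using PyTorch's built-in `torch.ao.quantization` API. It is not the TensorRT Model Optimizer workflow; the toy model and tensor shapes are assumptions chosen purely for illustration.

```python
import torch
import torch.nn as nn

# Toy stand-in for a transformer MLP block; shapes are illustrative only.
model = nn.Sequential(
    nn.Linear(1024, 4096),
    nn.GELU(),
    nn.Linear(4096, 1024),
).eval()

# Post-training dynamic quantization: Linear weights are stored as int8,
# and activations are quantized on the fly at inference time.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# Quick accuracy-vs-baseline check on random input.
x = torch.randn(8, 1024)
with torch.no_grad():
    max_err = (model(x) - quantized(x)).abs().max().item()
print(f"max abs deviation after int8 quantization: {max_err:.4f}")
```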
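The custom-kernel work mentioned above typically begins with small building blocks like the one below: a minimal element-wise Triton kernel following the standard vector-add pattern. The kernel and wrapper names are hypothetical, and the block size is an untuned default.

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one contiguous block of elements.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # guard against the ragged final block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = out.numel()
    grid = lambda meta: (triton.cdiv(n, meta["BLOCK_SIZE"]),)
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out

# Usage (requires a CUDA-capable GPU):
# a = torch.randn(4096, device="cuda"); b = torch.randn(4096, device="cuda")
# assert torch.allclose(add(a, b), a + b)
```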
Requirements
- Master’s, PhD, or equivalent experience in Computer Science, Artificial Intelligence, Applied Mathematics, or a related field.
- 5+ years of relevant work or research experience in deep learning.
- Strong software design skills, including debugging, performance analysis, and test development.
- Proficiency in Python, PyTorch, and modern ML frameworks/tools.
- Strong foundation in algorithms and programming fundamentals.
- Strong written and verbal communication skills, with the ability to work both independently and collaboratively in a fast-paced environment.
- Contributions to PyTorch, JAX, vLLM, SGLang, or other machine learning training and inference frameworks.
- Hands-on experience training or fine-tuning generative AI models on large-scale GPU clusters.
- Proficiency with GPU architectures and compilation stacks, including the ability to analyze and debug end-to-end performance.
- Familiarity with NVIDIA’s deep learning SDKs (e.g., TensorRT).
- Experience developing high-performance GPU kernels for machine learning workloads using CUDA, CUTLASS, or Triton.