NVIDIA

Principal Software Engineer – Large-Scale LLM Memory and Storage Systems

NVIDIA

Full-time

Location Type: Remote

Location: Remote • California, Massachusetts, Washington • 🇺🇸 United States


Salary

💰 $272,000 - $425,500 per year

Job Level

Lead

Tech Stack

Cloud • Distributed Systems • Open Source • Python

About the role

  • Design and evolve a unified memory layer that spans GPU memory, pinned host memory, RDMA-accessible memory, SSD tiers, and remote file/object/cloud storage to support large-scale LLM inference
  • Architect and implement deep integrations with leading LLM serving engines (such as vLLM, SGLang, TensorRT-LLM), with a focus on KV-cache offload, reuse, and remote sharing across heterogeneous and disaggregated clusters
  • Co-design interfaces and protocols that enable disaggregated prefill, peer-to-peer KV-cache sharing, and multi-tier KV-cache storage (GPU, CPU, local disk, and remote memory) for high-throughput, low-latency inference
  • Partner closely with GPU architecture, networking, and platform teams to exploit GPUDirect, RDMA, NVLink, and similar technologies for low-latency KV-cache access and sharing across heterogeneous accelerators and memory pools
  • Mentor senior and junior engineers, set technical direction for memory and storage subsystems, and represent the team in internal reviews and external forums (open source, conferences, and customer-facing technical deep dives)
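
The multi-tier KV-cache design described above (fast GPU memory backed by larger, slower tiers, with offload on eviction and promotion on reuse) can be illustrated with a minimal sketch. This is a hypothetical toy, not NVIDIA's or any serving engine's implementation; the class and parameter names (`TieredKVCache`, `fast_capacity`) are invented for illustration, and the "tiers" are plain Python dicts standing in for GPU HBM and host/disk storage:

```python
from collections import OrderedDict

class TieredKVCache:
    """Toy two-tier KV-cache: an LRU fast tier (standing in for GPU HBM)
    backed by an unbounded slow tier (standing in for host DRAM or SSD).
    Evicted entries are offloaded rather than discarded, so a later
    request with the same prefix can reuse them instead of re-prefilling.
    """

    def __init__(self, fast_capacity):
        self.fast = OrderedDict()   # LRU-ordered fast tier
        self.slow = {}              # slower, larger offload tier
        self.fast_capacity = fast_capacity

    def put(self, prefix_hash, kv_blocks):
        self.fast[prefix_hash] = kv_blocks
        self.fast.move_to_end(prefix_hash)
        # Offload least-recently-used entries instead of dropping them.
        while len(self.fast) > self.fast_capacity:
            evicted_key, evicted_val = self.fast.popitem(last=False)
            self.slow[evicted_key] = evicted_val

    def get(self, prefix_hash):
        if prefix_hash in self.fast:
            self.fast.move_to_end(prefix_hash)
            return self.fast[prefix_hash]
        if prefix_hash in self.slow:
            kv = self.slow.pop(prefix_hash)
            self.put(prefix_hash, kv)  # promote back to the fast tier on reuse
            return kv
        return None  # full miss: prefill must recompute the KV blocks
```

A production system would replace the slow-tier dict with RDMA-accessible remote memory or NVMe storage and move actual KV tensors, but the lookup/offload/promote flow is the same.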

Requirements

  • Master's or PhD degree, or equivalent experience
  • 15+ years of experience building large-scale distributed systems, high-performance storage, or ML systems infrastructure in C/C++ and Python, with a track record of delivering production services
  • Deep understanding of memory hierarchies (GPU HBM, host DRAM, SSD, and remote/object storage) and experience designing systems that span multiple tiers for performance and cost efficiency
  • Experience with distributed caching or key-value systems, especially designs optimized for low latency and high concurrency
  • Hands-on experience with networked I/O and RDMA/NVMe-oF/NVLink-style technologies, and familiarity with concepts like disaggregated and aggregated deployments for AI clusters
  • Strong skills in profiling and optimizing systems across CPU, GPU, memory, and network, using metrics to drive architectural decisions and validate improvements in time-to-first-token (TTFT) and throughput
  • Excellent communication skills and prior experience leading cross-functional efforts with research, product, and customer teams
Benefits

  • Equity
  • Benefits

Applicant Tracking System Keywords

Tip: use these terms in your resume and cover letter to boost ATS matches.

Hard skills
C, C++, Python, distributed systems, high-performance storage, ML systems infrastructure, memory hierarchies, distributed caching, key-value systems, profiling and optimizing systems
Soft skills
mentoring, technical direction, communication, cross-functional leadership
Certifications
Master's, PhD