Crafting, scaling, and hardening deep learning infrastructure libraries and frameworks for training on multi-thousand-GPU clusters.
Improving efficiency throughout the training stack: data loaders, distributed training, scheduling, and performance monitoring.
Building robust training pipelines and libraries to handle massive video datasets and enable rapid experimentation.
Collaborating with researchers, model engineers, and internal platform teams to enhance efficiency, minimize stalls, and improve training availability.
Owning core infrastructure components such as orchestration libraries, distributed training frameworks, and fault-resilient training systems.
Partnering with leadership to ensure infrastructure scales with growing GPU capacity and dataset size while maintaining developer efficiency and stability.
Requirements
BS, MS, or PhD in Computer Science, Electrical/Computer Engineering, or a related field, or equivalent experience.
12+ years of professional experience building and scaling high-performance distributed systems, ideally in ML, HPC, or large-scale data infrastructure.
Extensive knowledge of deep learning frameworks (PyTorch preferred), large-scale training (DDP/FSDP, NCCL, tensor/pipeline parallelism), and performance profiling.