Salary
💰 $148,000 - $287,500 per year
Tech Stack
Docker, Kubernetes, Linux, Switching
About the role
- Working with NVIDIA AI Native customers on data center GPU server and networking infrastructure deployments.
- Guiding customer discussions on network topologies and compute/storage, and supporting the bring-up of server, network, and cluster deployments.
- Identifying new project opportunities for NVIDIA products and technology solutions in data center and AI applications.
- Conducting regular technical meetings with customers as a trusted advisor, discussing product roadmaps, cluster debugging, and new technology introductions.
- Building custom demonstrations and proofs of concept to address critical business needs.
- Analyzing and debugging compute/network performance issues.
Requirements
- BS/MS/PhD in Electrical/Computer Engineering, Computer Science, Physics, or related fields, or equivalent experience.
- 5+ years of experience in Solution Engineering or similar roles.
- System-level understanding of server architecture, NICs, Linux, system software, and kernel drivers.
- Practical knowledge of networking (switching and routing for Ethernet/InfiniBand) and data center infrastructure (power/cooling).
- Familiarity with DevOps/MLOps technologies such as Docker/containers and Kubernetes.
- Effective time management and ability to balance multiple tasks.
- Excellent communication skills for articulating ideas and code clearly through documents and presentations.