Salary
💰 $324,000 - $490,000 per year
About the role
- Lead an effort to map, characterize, and prioritize cross-layer vulnerabilities in advanced AI systems, spanning data pipelines, training/inference runtimes, and system and supply-chain components
- Drive offensive research, produce technical deliverables, and deliver deep-dive reports on vulnerabilities and mitigations for training and inference
- Build an AI Stack Threat Map across the AI lifecycle, from data to deployment
- Orchestrate inputs across research, engineering, security, and policy to produce crisp, actionable outputs
- Serve as OpenAI’s primary technical counterpart for select external partners, including potential U.S. government stakeholders
- Perform hands-on threat modeling, red-team design, and exploitation research across heterogeneous infrastructures (compilers, runtimes, and control planes)
- Translate complex technical issues for technical and executive audiences; brief on risk, impact, and mitigations
- Provide decision-makers a common vulnerability taxonomy, early warning of systemic weaknesses, and a repeatable methodology that raises the bar for adversaries
Requirements
- A current security clearance is not required, but eligibility for clearance sponsorship is
- Experience leading high-stakes security research programs with external sponsors (e.g., national-security or critical-infrastructure stakeholders)
- Deep experience with cutting-edge offensive-security techniques
- Fluent across AI/ML infrastructure (data, training, inference, schedulers, accelerators) and able to threat-model end-to-end
- Hands-on experience with threat modeling, red-team design, and exploitation research across heterogeneous infrastructures (compilers, runtimes, control planes)
- Ability to operate independently, align diverse teams, and deliver on tight timelines
- Ability to communicate clearly and concisely with both experts and decision-makers
- Willingness to engage external partners and U.S. government stakeholders as the primary technical representative