Conduct original security research on cutting-edge machine learning systems, identifying novel attack vectors including adversarial examples, model poisoning, data extraction attacks, and jailbreaks for large language models and other foundation models.
Work directly with top-tier AI organizations (frontier labs, leading AI companies) to assess the security posture of their most advanced ML systems, providing expertise that matches their internal research capabilities.
Design and build novel security testing frameworks, evaluation methodologies, and open-source tools specifically for AI/ML security research—including adversarial robustness testing, model extraction detection, and automated vulnerability discovery systems.
Develop comprehensive threat models for emerging AI/ML deployment patterns, anticipate future attack vectors, and establish security frameworks that can scale with rapidly evolving AI capabilities.
Publish findings, present at security and AI/ML conferences, and contribute to the broader AI/ML security research discourse through papers, blog posts, and open-source contributions.
Bridge AI/ML research and security engineering, translating complex adversarial AI/ML concepts to diverse stakeholders and working closely with Trail of Bits' broader security research teams.
Requirements
PhD-level expertise (completed, near completion, or equivalent research experience) in machine learning, deep learning, or related fields with demonstrated research contributions.
Strong understanding of adversarial machine learning, including familiarity with attack paradigms such as evasion attacks, poisoning attacks, model inversion, membership inference, backdoor attacks, or prompt injection/jailbreaking techniques.
Extensive hands-on experience with modern ML frameworks (PyTorch, JAX, TensorFlow), transformer architectures, training methodologies, and the full ML development lifecycle from data pipelines to deployment.
Track record of high-quality research, demonstrated through publications, preprints, open-source contributions, or other artifacts recognized by the ML community.
Strong software engineering skills in Python and at least one systems language (C/C++, Rust, or similar), with experience building research prototypes and tooling.
Demonstrated ability to quickly learn new domains, identify security-critical edge cases, and think adversarially about complex systems without needing an explicit application security background.
Ability to distill complex AI/ML security research into clear, actionable recommendations for technical and executive audiences, and present findings to sophisticated clients who are themselves AI/ML experts.
Benefits
Competitive salary complemented by performance-based bonuses.
Fully company-paid insurance packages, including health, dental, vision, disability, and life.
A 401(k) plan with a 5% match of your base salary.
20 days of paid vacation, with flexibility for more in accordance with local regulations.
4 months of parental leave to welcome new family members.
$10,000 in relocation assistance to support your transition.
$1,000 work-from-home stipend to create a comfortable and productive home office.
Annual $750 Learning & Development stipend for continuous personal and professional growth.
Company-sponsored all-team celebrations, including travel and accommodation, to foster community and recognize achievements.
Philanthropic contribution matching up to $2,000 annually.