Tech Stack
AWS, Azure, Cloud, Google Cloud Platform, Python, PyTorch, Scikit-Learn, TensorFlow
About the role
- Design and conduct Red Team operations focused on AI/ML systems, including adversarial input testing, model inversion, poisoning, and evasion attacks
- Emulate threat actors targeting AI infrastructure (e.g., model hosting platforms, training data lakes, inference APIs)
- Develop custom tooling and payloads for AI-specific attack scenarios, including adversarial examples and synthetic data manipulation (an illustrative FGSM sketch follows this list)
- Collaborate with data scientists and ML engineers to assess model robustness, fairness, and explainability under adversarial conditions
- Lead Purple Team exercises to validate detection and response capabilities for AI-related threats
- Produce detailed reports and executive briefings outlining risks, attack paths, and remediation strategies
- Contribute to threat modeling and detection engineering for AI systems
- Stay current on emerging AI threats, adversarial ML research, and regulatory implications
- Mentor junior team members and foster a culture of offensive innovation
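For illustration only, here is a minimal sketch of the kind of adversarial input testing referenced above: a single FGSM step against a stand-in PyTorch classifier. The model, data shapes, and epsilon value are placeholders chosen for this example, not assets or parameters from any real engagement.

```python
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, images: torch.Tensor, labels: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Return adversarial copies of `images` using the Fast Gradient Sign Method."""
    images = images.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(images), labels)
    loss.backward()
    # Step in the direction that maximizes the loss, then clamp to a valid pixel range.
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Toy stand-in model and data so the sketch runs end to end.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(4, 1, 28, 28)
    y = torch.randint(0, 10, (4,))
    x_adv = fgsm_attack(model, x, y)
    print("max perturbation:", (x_adv - x).abs().max().item())
```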
Requirements
- Proven experience in Red Team operations, adversary emulation, or advanced penetration testing
- Deep understanding of AI/ML concepts including supervised/unsupervised learning, NLP, computer vision, and reinforcement learning
- Experience with adversarial ML techniques (e.g., FGSM, PGD, DeepFool, model extraction; an illustrative model-extraction sketch follows this list)
- Strong proficiency in Python and ML frameworks (e.g., TensorFlow, PyTorch, Scikit-learn)
- Familiarity with cloud platforms (AWS, Azure, GCP) and their AI services
- Knowledge of MITRE ATLAS, MITRE ATT&CK, and NIST AI RMF
- Experience with offensive tooling and scripting for automation and exploit development
- Excellent communication skills for technical and executive audiences
- Experience mentoring junior team members
- Sponsorship is not offered for this position
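As a rough illustration of the model extraction mentioned in the requirements, the sketch below trains a surrogate from labels returned by a hypothetical black-box classifier. The "API" is stood in by a local scikit-learn model, and `query_api` is an invented placeholder, not a real endpoint.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Hypothetical "black-box" target. In a real engagement this would be a remote
# inference API you are authorized to test, not a local object.
X_secret = rng.normal(size=(500, 4))
y_secret = (X_secret[:, 0] + X_secret[:, 1] > 0).astype(int)
target = LogisticRegression().fit(X_secret, y_secret)

def query_api(x: np.ndarray) -> np.ndarray:
    """Stand-in for the inference endpoint; only hard labels come back."""
    return target.predict(x)

# Extraction: label attacker-chosen queries through the API, then fit a surrogate.
X_queries = rng.normal(size=(2000, 4))
y_queries = query_api(X_queries)
surrogate = DecisionTreeClassifier(max_depth=5).fit(X_queries, y_queries)

# Fidelity check: how often the surrogate agrees with the target on fresh inputs.
X_test = rng.normal(size=(500, 4))
agreement = (surrogate.predict(X_test) == query_api(X_test)).mean()
print(f"surrogate/target agreement: {agreement:.2%}")
```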