Responsible AI Data Scientist role focused on adversarial testing and ethical red teaming
Responsibilities
Design, prototype, and implement adversarial testing strategies
Mentor and guide teams on adversarial testing processes
Integrate adversarial testing frameworks into AI development lifecycle
Develop detection models, safety guardrails, and proactive risk mitigation measures
Research and implement techniques for AI safety and robustness
Write clean, efficient, well-documented Python code to support research
Maintain reusable code modules for adversarial testing
Scope, document, and implement tests, and apply mitigations
Test for technical vulnerabilities, model vulnerabilities, and harm/abuse including bias, toxicity, and inaccuracy
Participate in labeling test data and writing reports on testing outcomes
Monitor emerging threats and continuously improve safety measures
Requirements
3-5 years of industry experience in software engineering, AI ethics, AI research, applied research, machine learning, data science, or similar roles
Demonstrated ability to think adversarially: anticipate how malicious actors might misuse AI systems and develop corresponding test scenarios
Experience creating heuristic-based detection logic and rules for identifying anomalous or suspicious activity in production systems and networks
Experience with problem-solving and troubleshooting complex issues, with an emphasis on root-cause analysis
Experience analyzing complex, large-scale data sets and communicating findings to technical and non-technical audiences
Proven organizational and execution skills within a fast-paced, multi-stakeholder environment
Experience working in a technical environment with a broad, cross-functional team
Excellent written and oral communication skills
Experience using SQL and relational databases
Ability to use Python, R, or other scripting languages to perform data analysis at scale
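To illustrate the kind of heuristic-based detection logic the role involves, here is a minimal Python sketch; all field names and thresholds are hypothetical assumptions, not a description of any existing system:

```python
# Hypothetical heuristic rules: flag request events whose request rate
# or payload size exceeds fixed thresholds, or that omit a user agent.
# Field names and threshold values are illustrative only.

MAX_REQUESTS_PER_MINUTE = 100
MAX_PAYLOAD_BYTES = 10_000

def is_suspicious(event: dict) -> bool:
    """Return True if the event trips any heuristic rule."""
    rules = [
        event.get("requests_per_minute", 0) > MAX_REQUESTS_PER_MINUTE,
        event.get("payload_bytes", 0) > MAX_PAYLOAD_BYTES,
        event.get("user_agent", "").strip() == "",  # missing UA is anomalous
    ]
    return any(rules)

events = [
    {"requests_per_minute": 12, "payload_bytes": 512, "user_agent": "cli/1.0"},
    {"requests_per_minute": 450, "payload_bytes": 512, "user_agent": "cli/1.0"},
]
flags = [is_suspicious(e) for e in events]  # [False, True]
```

In practice such rules would be derived from labeled test data and production telemetry, then tuned to balance false positives against missed abuse.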