
LLM Security Evaluation Expert
SilverEdge Government Solutions
full-time
Location Type: Office
Location: Columbia • Maryland • United States
About the role
- Responsible for rigorously testing the security and integrity of Large Language Models (LLMs).
- Design and execute sophisticated adversarial prompt attacks to identify vulnerabilities.
- Assess models' resistance to exploitation and ensure consistent, secure behavior.
Requirements
- TS/SCI with Polygraph clearance
- Strong knowledge of how LLMs work, including their architecture, training processes, capabilities, and inherent limitations.
- Familiarity with prominent LLM families (e.g., GPT series, Claude, Llama, PaLM) and their common characteristics.
- Proven experience in crafting and refining prompts to elicit specific behaviors or bypass restrictions in LLMs.
- Demonstrable understanding of techniques like jailbreaking, prompt injection, role-playing attacks, and exploiting model biases.
- Strong understanding of cybersecurity principles and common attack vectors, particularly as they apply to AI/ML systems.
- Ability to think like an attacker and anticipate potential exploits.
- Excellent ability to analyze complex systems, identify subtle vulnerabilities, and systematically test hypotheses.
- Clear and concise written and verbal communication skills, with the ability to document technical findings thoroughly.
- Understanding of the ethical implications of AI security and commitment to responsible testing practices.
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard Skills & Tools
Large Language Models • adversarial prompt attacks • prompt crafting • jailbreaking • prompt injection • role-playing attacks • exploiting model biases • cybersecurity principles • AI/ML systems
Soft Skills
analytical skills • communication skills • problem-solving • anticipation of exploits
Certifications
TS/SCI with Polygraph clearance