Tech Stack
AWS, Azure, Cloud, Cybersecurity, Docker, Google Cloud Platform, Kubernetes, Linux, NumPy, pandas, Python, PyTorch, scikit-learn, TensorFlow
About the role
- Develop AI systems for vulnerability detection and autonomous security testing
- Design and train reinforcement-learning and fine-tuning workflows (e.g., PPO, LoRA/QLoRA) to improve automated security agents
- Automate security testing pipelines with Python and modern ML frameworks
- Contribute to feature development from concept through deployment, including model evaluation and experiment tracking
- Identify and document vulnerabilities in AI-native applications and LLM workflows
- Research emerging AI security threats, prompt-injection tactics, and adversarial ML
- Document findings and help build a technical knowledge base
- Collaborate with experienced engineers and contribute to team mentorship and growth
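The role's fine-tuning work mentions LoRA/QLoRA adapters. As a rough illustration of the core idea (not this team's actual codebase), here is a minimal LoRA layer in plain PyTorch: the pretrained weight is frozen and only a low-rank update is trained. The class name, rank, and scaling values are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update (LoRA).

    The base weight W is frozen; the forward pass adds (alpha/r) * x A^T B^T,
    where A (r x in_features) and B (out_features x r) are the only
    trainable parameters.
    """
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weights
        self.scale = alpha / r
        # Standard LoRA init: A small random, B zero, so the adapter
        # starts as an exact no-op on the base layer's output.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Hypothetical usage: adapt a single 64x64 projection layer
layer = LoRALinear(nn.Linear(64, 64), r=4)
out = layer(torch.randn(2, 64))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
```

Only `trainable` low-rank parameters receive gradients, which is why adapter fine-tuning fits on far smaller hardware than full fine-tuning.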
Requirements
- Bachelor’s degree in Computer Science, Machine Learning, Artificial Intelligence, Data Science, Software Engineering, Mathematics, Statistics, or a related field
- Strong Python skills and comfort with data-science libraries (NumPy, pandas, scikit-learn)
- Foundational knowledge of AI/ML concepts and frameworks (PyTorch, TensorFlow, or JAX)
- Exposure to reinforcement learning or similar methods (deep Q-learning, PPO) through coursework, labs, or projects
- Understanding of the model-training life cycle: data preparation, feature engineering, and evaluation
- Experience with Linux environments and Git version control
- Interest in cybersecurity and ethical hacking
- Strong problem-solving and clear communication skills
- Must be located in or willing to relocate to the Indianapolis metropolitan area
- Legal authorization to work in the United States (the application includes required questions about sponsorship)
- Hands-on projects fine-tuning LLMs or training LoRA/QLoRA adapters (desired)
- Experience with vector databases or retrieval-augmented generation (RAG) (desired)
- Familiarity with AI security topics (model inversion, adversarial attacks) (desired)
- Internship or academic work in reinforcement learning or autonomous agents (desired)
- Contributions to open-source RL or adversarial-ML projects (desired)
- Experience with containerized ML ops (Docker/Kubernetes) (desired)
- Experience with cloud platforms (AWS, Azure, or GCP) (desired)
- Security certifications (Security+, etc.) or active pursuit of them (desired)
- Participation in CTF platforms (Hack The Box, TryHackMe, etc.) or AI/ML competitions (Kaggle) (desired)
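For candidates gauging the model-training life cycle requirement above, a minimal sketch of that loop with scikit-learn (data preparation, a held-out split, feature scaling, fitting, and evaluation). The synthetic dataset and model choice are illustrative assumptions, not part of the role.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Synthetic stand-in for, e.g., labeled security-event features
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Data prep: hold out 20% of the data for evaluation
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Feature engineering + model in one pipeline: scale, then classify
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# Evaluation on the held-out split
acc = accuracy_score(y_test, model.predict(X_test))
```

Packaging the scaler and classifier in one pipeline keeps preprocessing fitted only on training data, avoiding leakage into the evaluation split.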