Tech Stack
AWS · Azure · Cyber Security · PyTorch · TensorFlow
About the role
- Protect AI models used in toll fraud detection, license plate recognition, and traffic monitoring against adversarial attacks.
- Ensure integrity of training data and inference pipelines to prevent data poisoning or tampering.
- Monitor for model drift, bias, and manipulation risks.
- Identify and respond to threats targeting AI systems (e.g., adversarial inputs, prompt injection, model inversion).
- Develop detection mechanisms for misuse of AI APIs and tolling-related automation systems.
- Collaborate with SOC (Security Operations Center) teams to integrate AI-related alerts.
- Ensure AI systems comply with government regulations, privacy laws, and ethical AI guidelines.
- Implement data security controls for sensitive toll/payment/vehicle data used in AI models.
- Support audits for AI risk management and transparency.
- Work with data science, IT security, and DevOps teams to integrate security controls into the AI lifecycle.
- Contribute to AI model governance frameworks and Zero Trust AI adoption.
- Train employees and toll operators on safe and ethical AI usage.
- Stay updated on emerging AI security risks (deepfakes, synthetic fraud, AI-enabled attacks).
- Research and test AI red-teaming methods to find vulnerabilities in toll-related AI models.
- Recommend new AI security tools and frameworks (e.g., adversarial ML defense libraries).
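One of the responsibilities above, protecting training data against poisoning or tampering, can be sketched with a simple checksum manifest: hash every file in the dataset at a known-good point, then re-verify before each training run. This is a minimal illustration in Python using only the standard library; the function names (`build_manifest`, `verify_manifest`) are illustrative, not part of any tool named in this posting.

```python
import hashlib
from pathlib import Path

def sha256_of_file(path: Path) -> str:
    """Stream a file through SHA-256 so large datasets aren't loaded into memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(data_dir: Path) -> dict:
    """Record a content hash for every file under a training-data directory."""
    return {
        str(p.relative_to(data_dir)): sha256_of_file(p)
        for p in sorted(data_dir.rglob("*"))
        if p.is_file()
    }

def verify_manifest(data_dir: Path, manifest: dict) -> list:
    """Return files whose contents changed (or went missing) since the manifest was built."""
    tampered = []
    for rel, expected in manifest.items():
        p = data_dir / rel
        if not p.exists() or sha256_of_file(p) != expected:
            tampered.append(rel)
    return tampered
```

In practice the manifest itself would be signed and stored outside the data pipeline (e.g. in a secrets or artifact store), so an attacker who can modify the training data cannot also rewrite the hashes.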
Requirements
- Background in cybersecurity, AI/ML, or data science.
- Knowledge of adversarial machine learning, AI attack vectors, and model security.
- Familiarity with AI platforms (TensorFlow, PyTorch, Azure AI, AWS SageMaker).
- Strong understanding of encryption, IAM, and data protection laws (GDPR, DPDP, PCI-DSS).
- Certifications (preferred): Certified Ethical Hacker (CEH), GIAC AI Security Essentials (AISE), Microsoft Certified AI Engineer, CISSP with AI focus.