We are seeking a Governance & Risk Specialist to establish and oversee the governance frameworks that ensure safe, compliant, and responsible use of AI systems.
Responsibilities
Classify risks for AI systems and conduct Data Protection Impact Assessments (DPIAs) and Privacy Impact Assessments (PIAs).
Maintain “agent cards” that record purpose, limitations, lineage, and performance characteristics of deployed AI agents.
Map organizational practices to regulatory and industry frameworks (NIST AI RMF, ISO standards, EU AI Act, GDPR, etc.).
Operate governance checkpoints across the AI lifecycle, ensuring audit trails for key decisions.
Define joint human-and-agent performance metrics and require evidence of adoption to validate responsible use.
Ensure vendor contracts include appropriate language to clarify data and intellectual-property ownership and limit liability for AI-related incidents.
Partner with engineering, security, and legal teams to refine governance practices as AI technologies and regulations evolve.
Requirements
Experience in risk management, governance, compliance, or trust & safety in technology or AI-driven environments.
Strong knowledge of data protection frameworks (GDPR, CCPA, HIPAA, etc.) and risk assessment processes (DPIA/PIA).
Familiarity with AI governance standards and regulations (NIST AI RMF, ISO/IEC AI standards such as ISO/IEC 42001, EU AI Act).
Demonstrated ability to run governance gates, audits, or compliance checkpoints in a technical or regulatory context.
Skilled in producing clear, defensible documentation for both technical and executive stakeholders.
Bonus: Experience with AI policy, vendor risk management, or responsible AI practices.