Develop the strategy and direct the execution of a portfolio of concurrent assurance activities relating to the development, validation, and deployment of non-AI model objects, delivering high quality within budget and within the expected timelines.
Act as a trusted subject matter expert on non-AI model risks and controls in financial services across the IA team.
Partner with IA Model Risk team to scale AI assurance across IA work programs.
Evaluate AI use cases (design, prompts, embeddings, performance) against technical standards, guidelines, and pipeline integrity.
Assess AI/GenAI use case lifecycle management across development, validation, deployment, monitoring, and retirement.
Evaluate risk dimensions including bias, fairness, robustness, explainability, and transparency for non-AI model objects.
Assess guardrail design and effectiveness to mitigate hallucinations, prompt injection, and unsafe outputs.
Provide technical assurance input into model-related regulatory reviews and compliance assessments.
Ensure timely delivery of comprehensive regulatory and internal audit issue validation, including issues raised by external parties.
Evaluate the adequacy and effectiveness of internal AI governance policies, adherence to AI/GenAI related laws and regulations, and the robustness of AI-related ethical or compliance incident response frameworks.
Evaluate testing methods and approaches, and assess the adequacy and completeness of tests given the specific needs of GenAI systems and other non-model AI objects.
Evaluate functional red-teaming approaches for AI use-cases and non-model objects.
Attract, develop, and retain talent; recommend staffing levels required to fulfill responsibilities effectively while establishing and adhering to talent management processes and compensation and performance management programs.
Work across IA to maximize the efficiency and effectiveness of resources.
Provide local site leadership encouraging engagement and collaboration across all IA organizations.
Perform control environment assessments and identify thematic issues and trends to improve controls and governance over the identification, measurement, management, and reporting of risks.
Champion horizon scanning for emerging methods and approaches for GenAI system risk management.
Promote auditability by design by advising product and engineering teams on controls, documentation, and governance structure needed to make AI systems inherently auditable.
Shape the future of AI audit practices by contributing to the development of new assurance methods for emerging risks (systemic bias propagation, synthetic data misuse, model drift).
Support and endorse the Quality Assurance (QA) function of IA, and resolve issues found by QA, improving audit processes and coverage.
Represent IA in relevant governance forums including Boards and Committees.
Develop and maintain strong working relationships across other IA teams, Management Stakeholders (MSs), and Regulators.
Build trust through proactive and transparent interactions while maintaining strong relationships with regulators aligned to Internal Audit.
Engage regularly with MSs to understand emerging risks and help address escalations.
Pro-actively advise and assist MSs in considering risks and controls.
Support the audit team in their interactions with MSs.
Participate and share relevant knowledge and insight, on an ongoing basis, within and across IA teams, to drive the consistent application of the methodology, consistency of audit opinions, higher efficiency, and improved quality of overall assurance.
Requirements
Extensive experience in model audit, validation, or model risk management within financial services.
Experience with LLMs and with GenAI validation and testing methods.
Understanding of financial regulations and how they intersect with AI/GenAI (e.g., conduct risk, operational resilience, data protection, algorithmic trading, AML/KYC, consumer protection).
Subject matter expert in model validation and governance; audit experience preferred.
Related certifications such as Certified Information Systems Auditor (CISA), Certified Internal Auditor (CIA), Certified in Risk and Information Systems Control (CRISC), or similar.
AI-related certifications are a plus.
Bachelor's degree/University degree in decision science, computer science, data science, mathematics, statistics or a related field, or equivalent experience.
Master’s degree preferred.
Benefits
medical, dental & vision coverage
401(k)
life, accident, and disability insurance
wellness programs
paid time off packages (planned time off, unplanned time off, paid holidays)