Global Payments Inc.

Principal, AI Governance and Tooling

Full-time

Location: 🇺🇸 United States

Job Level

Lead

About the role

  • Lead the design, development, and execution of independent AI/ML model validation frameworks across various use cases
  • Conduct bias audits, adversarial testing, and stress testing to evaluate model robustness, fairness, and resilience against vulnerabilities
  • Apply statistical testing, benchmarking methodologies, and explainability (XAI) techniques to ensure models are transparent and interpretable
  • Utilize synthetic data generation and automated testing frameworks to simulate edge cases and rare scenarios for risk assessment
  • Document validation methodologies, findings, and risk-based recommendations for stakeholders, ensuring traceability and audit-readiness
  • Develop and implement enterprise AI monitoring frameworks for deployed models, focusing on real-time performance tracking, bias detection, and compliance verification
  • Apply anomaly detection and AI observability solutions to identify and remediate performance degradation, drift, or ethical risks
  • Oversee incident response for AI failures, coordinating with risk, compliance, and engineering teams to ensure timely mitigation
  • Integrate monitoring insights into governance dashboards and reporting platforms to inform executives and regulatory stakeholders
  • Ensure all testing and monitoring activities align with RUAI principles, industry best practices, and applicable regulations (e.g., EU AI Act, GDPR, CCPA, Colorado AI Act, NIST AI RMF)
  • Leverage AI governance platforms and risk assessment tools to centralize validation evidence, compliance records, and ongoing monitoring metrics
  • Partner with Legal, Compliance, and Risk to interpret regulatory requirements and translate them into actionable technical and operational controls
  • Provide expert guidance to data scientists and engineers on bias mitigation, fairness optimization, and explainability best practices
  • Stay informed on emerging trends in AI risk assessment, validation methodologies, monitoring tools, and regulatory developments
  • Lead workshops, training sessions, and cross-functional knowledge sharing to advance organizational maturity in AI testing and monitoring
  • Contribute to enterprise AI governance strategy by identifying technology investments, process enhancements, and automation opportunities
  • Provide guidance and mentoring to analysts as needed
  • This list is not exhaustive; perform other duties as assigned

Requirements

  • 5+ years in independent AI/ML model testing and validation, including robustness, fairness, and compliance verification.
  • 3–5+ years in AI monitoring and risk management, including real-time model performance tracking, anomaly detection, and compliance monitoring.
  • Proven experience developing and executing rigorous validation frameworks and performing bias, adversarial, and stress testing.
  • Strong knowledge of AI governance principles, ethical AI frameworks, and relevant regulations (EU AI Act, GDPR, CCPA, Colorado AI Act, NIST AI RMF).
  • Hands-on experience with validation tools, statistical testing frameworks, synthetic data generation, automated testing platforms, and AI observability tools.
  • Deep expertise in model validation, fairness audits, and explainability techniques.
  • Proficiency in monitoring and logging frameworks for AI/ML systems.
  • Excellent written and verbal communication skills to document findings, influence stakeholders, and present to executive leadership.
  • Ability to work across diverse teams and translate complex technical concepts into clear operational and compliance guidance.
  • Master's Degree in a related field required. Preferred qualifications: experience establishing AI governance programs, processes, and frameworks for AI model testing and validation, AI solution monitoring, and AI risk management.