Integrity Operations Manager, Sensitive Content

Bumble Inc.

Full-time

Location Type: Hybrid

Location: London, United Kingdom

Salary

£50,000–£60,000 per year

About the role

  • Lead day-to-day operations for the Sensitive Content pillar, ensuring accurate, timely, and policy-aligned image classification outcomes that reduce harm and protect member experience.
  • Own end-to-end BPO and AI moderation vendor governance, including SLA definition, performance management, quality assurance frameworks, and structured business reviews that drive continuous improvement.
  • Translate sensitive content policies and taxonomy updates into clear annotation guidelines, decision trees, and workflow documentation; run calibration sessions and inter-rater alignment exercises to strengthen consistency.
  • Design and evolve quality measurement frameworks, including sampling strategies, error trend analysis, reviewer accuracy tracking, and root-cause insights that inform targeted training plans.
  • Partner cross-functionally with Policy, Product, Engineering, and Machine Learning teams to improve moderation tooling, classifier performance feedback loops, and pipeline design — demonstrating an agile mindset as systems evolve.
  • Coordinate special labeling initiatives (e.g., new harm typologies, taxonomy refinements, model retraining datasets), taking ownership from insight to impact with defined success metrics and clear timelines.
  • Build and communicate operational reporting across quality, throughput, backlog health, escalation volumes, and cost efficiency — transforming data into clear narratives and actionable recommendations.
  • Model calm, values-led decision-making when managing high-sensitivity escalations, balancing speed, risk, and member impact while upholding Bumble’s values of Respect and Excellence.

Requirements

  • Typically requires 4–6 years of experience, though we welcome candidates with alternative backgrounds that demonstrate equivalent skills.
  • Experience leading large-scale vendor or BPO moderation operations, including SLA management, structured QA programs, governance cadences, and distributed team performance oversight.
  • Strong working knowledge of Trust & Safety policy taxonomies and demonstrated experience operationalizing them into labeling schemas, annotation standards, and moderation workflows.
  • Hands-on experience supporting AI/ML-driven safety systems, including Human-in-the-Loop review design, dataset quality controls, calibration methodologies, and feedback loops for model improvement.
  • Comfort with operational data analysis, including building reporting dashboards, conducting trend and variance analysis, identifying error themes, and presenting insights clearly; SQL proficiency is a strong plus.
  • Demonstrated ability to collaborate with purpose across Policy, Product, Engineering, QA/Learning & Development, and external vendors, while taking ownership of delivery and outcomes.
  • Strong problem-solving judgment under ambiguity, with the ability to see things through from insight to measurable impact and adapt quickly as harm patterns evolve.
  • Thoughtful AI fluency: you understand where automation accelerates harm detection, where human judgment is essential, and how to continuously strengthen HITL systems without compromising fairness or member trust.
  • A values-driven operator who fosters psychologically safe ways of working, demonstrates Curiosity when evaluating edge cases, and upholds Respect when navigating sensitive subject matter.

Applicant Tracking System Keywords

Tip: use these terms in your resume and cover letter to boost ATS matches.

Hard Skills & Tools
image classification, BPO operations, AI moderation, SLA management, quality assurance, data analysis, SQL, Human-in-the-Loop review, calibration methodologies, reporting dashboards
Soft Skills
leadership, collaboration, problem-solving, decision-making, adaptability, communication, curiosity, values-driven operation, calmness under pressure, psychological safety