Develop datasets and models for training and evaluating end-to-end Content Safety and ML Fairness systems.
Research and implement cutting-edge techniques for bias detection and mitigation in LLMs and LLM-based systems such as retrieval-augmented generation (RAG) pipelines.
Define and track key metrics for responsible LLM behavior and usage.
Follow MLOps best practices for automation, monitoring, scalability, and safety.
Contribute to the MLOps platform and develop safety tools to help ML teams be more effective.
Collaborate with other engineers, data scientists, and researchers to develop and implement solutions to Content Safety and ML Fairness challenges.
Requirements
Master’s or PhD in Computer Science, Electrical Engineering, or a related field, or equivalent experience
3+ years of experience developing and deploying machine learning models in production
Strong understanding of machine learning principles and algorithms
Hands-on programming experience in Python and in-depth knowledge of machine learning frameworks such as Keras or PyTorch
2+ years of experience in one or more of the following areas: Content Safety, ML Fairness, Robustness, AI Model Security, or related fields
Background in one or more of the following Content Safety areas: hate/harassment, sexualized content, harmful/violent content, or other specific areas highlighted in your application
Experience working with large multi-modal datasets and multi-modal models
Strong problem-solving and analytical skills
Excellent collaboration and communication skills
Behaviors that build trust: humility, transparency, respect, and intellectual honesty
Benefits
Equity
Comprehensive benefits package