
Research Engineer/Scientist – Human Alignment, Consumer Devices
OpenAI
Full-time
Location Type: Hybrid
Location: San Francisco • California • United States
Salary
$380,000 - $445,000 per year
About the role
- Develop RLHF and post-training methods for multimodal models.
- Build reward models and preference-learning pipelines for adaptive, personalized model behavior.
- Design datasets, rubrics, and evaluation frameworks that capture user preferences, contextual appropriateness, and long-term value in realistic tasks.
- Run experiments on policy improvement using explicit feedback, implicit signals, and model-based grading.
- Work on long-horizon evaluation problems, where model quality depends not just on a single response but on whether behavior improves outcomes over time.
- Collaborate closely with safety researchers to ensure that adaptation and personalization remain aligned, interpretable, and bounded by clear constraints.
- Prototype and iterate quickly on training recipes, reward formulations, data pipelines, and evaluation suites for product-relevant behaviors.
- Help define how OpenAI measures success for personalized AI systems, including trust, appropriateness, and long-term user benefit.
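As background for the reward-modeling and preference-learning work described above, the standard starting point in the RLHF literature is a pairwise (Bradley-Terry) preference loss, where a reward model is trained so that the response a human preferred scores higher than the rejected one. The sketch below is illustrative only and not a description of OpenAI's internal implementation; the function name and example scores are made up for the example.

```python
import numpy as np

def bradley_terry_loss(r_chosen, r_rejected):
    """Pairwise preference loss: mean of -log sigmoid(r_chosen - r_rejected).

    r_chosen / r_rejected are reward-model scores for the human-preferred
    and human-rejected responses in each comparison pair.
    """
    margin = np.asarray(r_chosen, dtype=float) - np.asarray(r_rejected, dtype=float)
    # -log sigmoid(m) == log(1 + exp(-m)), computed stably via logaddexp.
    return float(np.mean(np.logaddexp(0.0, -margin)))

# A reward model that ranks the preferred response higher incurs low loss;
# inverting the ranking raises the loss.
low = bradley_terry_loss([2.0, 1.5], [0.0, -0.5])
high = bradley_terry_loss([0.0, -0.5], [2.0, 1.5])
```

In practice the scores come from a learned model head and the loss is minimized by gradient descent over many labeled comparison pairs; the closed-form toy above just shows the objective's shape.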
Requirements
- Have a strong background in machine learning research, with experience in RLHF, reward modeling, preference optimization, or post-training for large models.
- Have worked on one or more of: reinforcement learning, ranking, recommender systems, personalization, memory, or human-in-the-loop evaluation.
- Care about rigorous empirical work and know how to design clean experiments, reliable evals, and decision-useful metrics.
- Are excited by the challenge of training models against nuanced behavioral objectives.
- Have experience building datasets or eval pipelines grounded in human preferences, rubrics, or real-world product behavior.
- Are comfortable working across the stack, from data generation and labeling strategy to training runs, reward functions, and analysis.
- Are interested in multimodal AI and in how models can learn from richer interaction signals over time.
- Want to work on product-shaping research with unusually high stakes for trust, alignment, and long-term user value.
- Enjoy close collaboration with engineers, designers, and safety researchers to turn frontier research into real systems.
Benefits
- Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts
- Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)
- 401(k) retirement plan with employer match
- Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)
- Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees
- 13+ paid company holidays and multiple coordinated paid office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more where required by applicable state or local law)
- Mental health and wellness support
- Employer-paid basic life and disability coverage
- Annual learning and development stipend to fuel your professional growth
- Daily meals in our offices, and meal delivery credits as eligible
- Relocation support for eligible employees
- Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.
Applicant Tracking System Keywords
Tip: use these terms in your resume and cover letter to boost ATS matches.
Hard Skills & Tools
reinforcement learning, reward modeling, preference optimization, multimodal models, policy improvement, dataset design, evaluation frameworks, human-in-the-loop evaluation, empirical research, data pipelines
Soft Skills
collaboration, communication, problem-solving, adaptability, attention to detail, critical thinking, creativity, user-centered design, interpersonal skills, rigorous experimentation