
Research Engineer/Scientist - Human Alignment, Consumer Devices

Job Description

About the Team

The Future of Computing Research team is an applied research team within the Consumer Devices group focused on developing new methods, models, and evaluation frameworks that support our vision for the future of computing. We work at the frontier of multimodal AI, helping turn emerging model capabilities into product experiences that are useful, delightful, and worthy of long-term trust.

Our work explores a new class of AI systems that can learn over time, adapt to individuals, and support people in the flow of daily life. This includes long-term memory, user modeling, and personalization systems that are aligned not just with immediate satisfaction, but with a person’s broader goals, values, and well-being.

We work closely across research, engineering, design, product, and safety to define what it means to build AI systems that know you over time, act at the right moment, and help in ways that are context-aware, respectful, and demonstrably beneficial.

About the Role

We are looking for a Research Engineer / Scientist to join the Future of Computing Research team to work on RLHF and post-training for personalized, multimodal AI systems.

This role will focus on building the learning and evaluation foundations that help models become more context-aware, adaptive, and useful over time. You will work on problems such as reward modeling, preference learning, long-horizon evaluation, and policy improvement for systems that must make high-quality behavioral decisions in realistic user settings. The work is deeply product-grounded: success is not just higher benchmark performance, but better model behavior in real-world use.

The ideal candidate is excited about pushing beyond one-turn assistant behavior toward systems that improve through feedback, learn from richer signals, and are trained against meaningful notions of user value. Internally, that maps closely to the need for careful reward design, feedback loops, and evaluation frameworks that test whether interventions are actually beneficial over longer horizons.

This role is based in San Francisco, CA. We use a hybrid work model of four days in the office per week and offer relocation assistance to new employees.

In this role, you will:

  • Develop RLHF and post-training methods for multimodal models.

  • Build reward models and preference-learning pipelines for adaptive, personalized model behavior.

  • Design datasets, rubrics, and evaluation frameworks that capture user preferences, contextual appropriateness, and long-term value in realistic tasks.

  • Run experiments on policy improvement using explicit feedback, implicit signals, and model-based grading.

  • Work on long-horizon evaluation problems, where model quality depends not just on a single response but on whether behavior improves outcomes over time.

  • Collaborate closely with safety researchers to ensure that adaptation and personalization remain aligned, interpretable, and bounded by clear constraints.

  • Prototype and iterate quickly on training recipes, reward formulations, data pipelines, and evaluation suites for product-relevant behaviors.

  • Help define how OpenAI measures success for personalized AI systems, including trust, appropriateness, and long-term user benefit.

You might thrive in this role if you:

  • Have a strong background in machine learning research, with experience in RLHF, reward modeling, preference optimization, or post-training for large models.

  • Have worked on one or more of: reinforcement learning, ranking, recommender systems, personalization, memory, or human-in-the-loop evaluation.

  • Care about rigorous empirical work and know how to design clean experiments, reliable evals, and decision-useful metrics.

  • Are excited by the challenge of training models against nuanced behavioral objectives.

  • Have experience building datasets or eval pipelines grounded in human preferences, rubrics, or real-world product behavior.

  • Are comfortable working across the stack, from data generation and labeling strategy to training runs, reward functions, and analysis.

  • Are interested in multimodal AI and in how models can learn from richer interaction signals over time.

  • Want to work on product-shaping research with unusually high stakes for trust, alignment, and long-term user value.

  • Enjoy close collaboration with engineers, designers, and safety researchers to turn frontier research into real systems.

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. 

We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or any other applicable legally protected characteristic.

For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.

Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.

To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.

We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.

About the job

Posted on

Mar 11, 2026

Apply before

Apr 10, 2026

Job type

Full-time

Salary Range

$380,000 - $445,000/yr
