Anthropic

Senior Research Scientist, Reward Models

Remote

Job Description

About Anthropic

Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About the role

As a Senior Research Scientist on our Reward Models team, you'll lead research efforts to improve how we specify and learn human preferences at scale. Your work will directly shape how our models understand and optimize for what humans actually want — enabling Claude to be more useful, more reliable, and better aligned with human values.

This role focuses on pushing the frontier of reward modeling for large language models. You'll develop novel architectures and training methodologies for RLHF, research new approaches to LLM-based evaluation and grading (including rubric-based methods), and investigate techniques to identify and mitigate reward hacking. You'll collaborate closely with teams across Anthropic, including Finetuning, Alignment Science, and our broader research organization, to ensure your work translates into concrete improvements in both model capabilities and safety. 
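As illustrative background on this problem space (a generic sketch, not Anthropic code): reward models for RLHF are commonly trained on human preference pairs with a Bradley-Terry-style objective, which pushes the scalar reward of the human-preferred response above the reward of the rejected one. A minimal PyTorch version, with all names being placeholders:

```python
import torch
import torch.nn.functional as F

def pairwise_reward_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry pairwise loss for reward-model training.

    r_chosen, r_rejected: shape (batch,) scalar rewards produced by a
    reward-model head for the preferred / dispreferred responses.
    """
    # log sigmoid(r_chosen - r_rejected) is the log-likelihood that the
    # model ranks the pair the same way the human labeler did.
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# Toy usage: rewards for a batch of 4 preference pairs.
chosen = torch.tensor([1.2, 0.3, 2.0, -0.5])
rejected = torch.tensor([0.1, 0.4, 1.5, -1.0])
print(pairwise_reward_loss(chosen, rejected).item())
```

The learned scalar reward then serves as the optimization target during reinforcement learning, which is exactly where failure modes like reward hacking and specification gaming arise.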

We're looking for someone who can drive ambitious research agendas while also shipping practical improvements to production systems. You'll have the opportunity to work on some of the most important open problems in AI alignment, with access to frontier models and significant computational resources. Your work will directly advance the science of how we train AI systems to be both highly capable and safe. 

Note: For this role, we conduct all interviews in Python.

Responsibilities

  • Lead research on novel reward model architectures and training approaches for RLHF
  • Develop and evaluate LLM-based grading and evaluation methods, including rubric-driven approaches that improve consistency and interpretability (see the sketch after this list)
  • Research techniques to detect, characterize, and mitigate reward hacking and specification gaming
  • Design experiments to understand reward model generalization, robustness, and failure modes
  • Collaborate with the Finetuning team to translate research insights into improvements for production training pipelines
  • Contribute to research publications, blog posts, and internal documentation
  • Mentor other researchers and help build institutional knowledge around reward modeling
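For the rubric-driven grading bullet above, one common pattern is to grade a response against each rubric criterion independently with an LLM judge and then aggregate the verdicts, which tends to be more consistent and interpretable than a single holistic score. A minimal illustration with a stubbed judge call; none of this reflects Anthropic's internal tooling:

```python
from typing import Callable

Rubric = list[str]  # one yes/no criterion per entry

def build_criterion_prompt(criterion: str, prompt: str, response: str) -> str:
    """Format a single yes/no grading question for the judge model."""
    return (
        "You are grading a model response against one rubric criterion.\n"
        f"Criterion: {criterion}\n"
        f"User prompt: {prompt}\n"
        f"Response: {response}\n"
        "Answer YES if the response satisfies the criterion, else NO."
    )

def rubric_score(
    prompt: str,
    response: str,
    rubric: Rubric,
    judge: Callable[[str], str],  # judge(prompt_text) -> model completion
) -> float:
    """Fraction of rubric criteria the judge marks as satisfied."""
    hits = 0
    for criterion in rubric:
        verdict = judge(build_criterion_prompt(criterion, prompt, response))
        hits += verdict.strip().upper().startswith("YES")
    return hits / len(rubric)

# Toy usage with a stubbed judge that always answers YES.
rubric = ["Answers the question directly.", "Cites no fabricated facts."]
print(rubric_score("What is 2+2?", "4", rubric, judge=lambda p: "YES"))
```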

You may be a good fit if you

  • Have a track record of research contributions in reward modeling, RLHF, or closely related areas of machine learning
  • Have experience training and evaluating reward models for large language models
  • Are comfortable designing and running large-scale experiments with significant computational resources
  • Can work effectively across research and engineering, iterating quickly while maintaining scientific rigor
  • Enjoy collaborative research and can communicate complex ideas clearly to diverse audiences
  • Care deeply about building AI systems that are both highly capable and safe

Strong candidates may also

  • Have published research on reward modeling, preference learning, or RLHF
  • Have experience with LLM-as-judge approaches, including calibration and reliability challenges
  • Have worked on reward hacking, specification gaming, or related robustness problems
  • Have experience with constitutional AI, debate, or other scalable oversight approaches
  • Have contributed to production ML systems at scale
  • Have familiarity with interpretability techniques as applied to understanding reward model behavior

The expected base compensation for this position is below. Our total compensation package for full-time employees includes equity and benefits, and may include incentive compensation.

Annual Salary:
$340,000–$425,000 USD

Logistics

Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience.

Location-based hybrid policy:
Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.

Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.

We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.

How we're different

We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.

The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.

Come work with us!

Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.

Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process.


About the job

Posted on: Dec 17, 2025
Apply before: Jan 16, 2026
Job type: Full-time
