About Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
About the role
Safeguards' Dangerous Asymmetric Harms team focuses on detecting and preventing harmful use of Anthropic's AI services through technical safeguards and policy solutions. We work across three primary risk domains: CBRN (Chemical, Biological, Radiological, and Nuclear), Cyber, and Dangerous Asymmetric Advanced Technologies (DAAT).
This is a dual-hatted technical and policy role, primarily focused on the technical side: creating and running cyber evaluations that span low-level to catastrophic harms. You will also own and communicate policy boundaries for some of the most impactful technologies ever created.
We are looking for a teammate who can execute rapidly, maintain high throughput, and bring a strong builder mindset to solving complex problems. The ideal candidate will be able to quickly prototype and iterate on evaluation infrastructure while maintaining high engineering standards. You'll be building systems to evaluate capabilities that have never existed before, requiring creative solutions and rigorous implementation.
Responsibilities:
- Design and implement robust evaluation infrastructure to measure model capabilities and risks across Cyber, CBRN, and Dangerous Asymmetric Advanced Technologies, with a primary focus on Cyber
- Independently drive technical projects to build and scale evaluation systems that could become industry standards
- Help build and run systems that conduct deep automated analysis of cyber harm across all Anthropic surfaces
- Build evaluation infrastructure that scales across our sandboxing systems
- Measure AI capability uplift to anticipate and test Safeguards for CBRN, Cyber, and DAAT, with a primary focus on Cyber
- Create and run evaluations independently to test cyber policies
- Design heuristics for prohibited and dual-use cyber categories for classifier training
- Partner with research and engineering teams to implement cyber safety systems
- Support AI uplift testing with operational insights on threat patterns
- Own policies for emerging technologies outside traditional cyber/CBRN frameworks
- Address critical blind spots at domain intersections (cyber-physical attacks, bio-cyber threats)
- Support policies for explosive devices and advanced delivery systems
- Create threat models for novel asymmetric technologies (drone swarms, space weapons, etc.)
- Coordinate with CBRN and Cyber Policy Managers on overlapping threats
You may be a good fit if you have:
Must-Haves:
- Familiarity with the basics of prompting large language models (LLMs)
- Familiarity with using LLMs both as generative models to draw samples from and as classifiers
- Ability to design intelligent language model "pipelines" that automate tasks (see the sketch after this list)
- Very comfortable with Python - not just "able to code" but fluent in building complex systems
- Strong async Python skills - critical for scaling evaluations efficiently
- Hacker and fast prototyping mindset - experience finding vulnerabilities and thinking adversarially
- Self-sufficient builder - can create and run evaluations without engineering support
- Systems thinking - comfortable with complex setups and debugging
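To make the pipeline and async requirements concrete, here is a minimal sketch of the kind of harness this role builds: drawing samples from a model and grading them with an LLM used as a classifier, fanned out concurrently with asyncio. This is illustrative only; `query_model` is a hypothetical stand-in for a real model API client, and the prompts, labels, and concurrency limit are placeholder assumptions, not our actual evaluation stack.

```python
# Minimal sketch of an async LLM evaluation pipeline (illustrative only).
# `query_model` is a hypothetical stand-in for a real model API call.
import asyncio
from dataclasses import dataclass


@dataclass
class EvalResult:
    prompt: str
    completion: str
    flagged: bool


async def query_model(prompt: str) -> str:
    """Hypothetical async model call; swap in a real API client here."""
    await asyncio.sleep(0.1)  # simulate network latency
    return f"<completion for: {prompt!r}>"


async def classify(completion: str) -> bool:
    """Use the LLM as a classifier to grade a sampled completion."""
    verdict = await query_model(
        f"Label the following text HARMFUL or SAFE:\n{completion}"
    )
    return "HARMFUL" in verdict.upper()


async def evaluate_one(prompt: str, sem: asyncio.Semaphore) -> EvalResult:
    # The semaphore caps in-flight requests, which is what lets the
    # harness scale without overwhelming the model backend.
    async with sem:
        completion = await query_model(prompt)
        flagged = await classify(completion)
    return EvalResult(prompt, completion, flagged)


async def run_eval(prompts: list[str], max_concurrency: int = 8) -> list[EvalResult]:
    sem = asyncio.Semaphore(max_concurrency)
    return await asyncio.gather(*(evaluate_one(p, sem) for p in prompts))


if __name__ == "__main__":
    results = asyncio.run(run_eval(["prompt-1", "prompt-2", "prompt-3"]))
    print(f"{sum(r.flagged for r in results)}/{len(results)} flagged")
```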
Key Attributes:
- Hacker mentality - relentlessly motivated to find gaps
- Curious and creative - approaches problems from unexpected angles
- Dependable under pressure - manages tight deadlines without dropping balls
- Technical depth with policy awareness - bridges both domains effectively
Preferred Qualifications:
- Hands-on-keyboard offensive security experience - background in penetration testing, red teaming, and/or vulnerability research
- Security certifications - SANS certifications, OSCP, or similar credentials, with a particular focus on ICS/SCADA systems
- Experience with AI evaluation benchmarks and frameworks
- Background in AI/ML security or adversarial testing
- Previous work at intersection of security and policy
Representative Projects:
- Build infrastructure for running large-scale model evaluations across multiple risk domains
- Create tools for rapid evaluation prototyping and iteration
- Contribute to evaluation frameworks that could become industry standards
- Design and implement custom testing environments for specific capability assessments
- Develop monitoring and analysis systems for evaluation results
- Collaborate with domain experts to translate theoretical risks into practical tests, such as cyber ranges and autonomous replication environments
Candidates Need Not Have:
- Domain expertise in specific risk areas
- 100% of the skills needed to perform the job
- Prior experience with AI model evaluation
The expected salary range for this position is:
Logistics
Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience.
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. If we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
How we're different
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Come work with us!
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.