Technical Program Manager, Responsible Scaling Policy at Anthropic

Anthropic
Technical Program Manager, Responsible Scaling Policy
San Francisco

Anthropic is an AI safety and research company that’s working to build reliable, interpretable, and steerable AI systems. Our Responsible Scaling Policy defines a series of capability thresholds, called AI Safety Levels (ASLs), that represent increasing levels of risk. Crossing an ASL threshold triggers a commitment to more stringent safety, security, and operational measures intended to handle that increased risk.

We are seeking a Technical Program Manager to own and drive coordination of our Responsible Scaling Policy implementation, a program that spans our Engineering, Research, and Product orgs.

About the Team
In this role, you will work as a member of our centralized TPM function. Strong soft skills are paramount, as you will be front and center driving this top-priority initiative. Doing so will require generating buy-in, balancing competing opinions, and vying for attention in our rapidly scaling environment.
 
This role is a great fit for someone who has both seen excellence at scale and operated in a rapidly scaling environment. We are seeking candidates with existing TPM expertise who are also comfortable acting as adaptable generalists and adding value fast.
We excel at maintaining a broad view of our work but diving deep into the details when necessary. We understand company-wide safety and business goals, translate and organize them into technical projects, and drive execution. We are adept at engaging with both non-technical and technical stakeholders at all levels of the company, including executive leadership.

Responsibilities:

    • Develop detailed project plans, timelines, and resourcing strategies for RSP implementation.
    • Provide clear and transparent reporting on program status, issues, and risks to executives and stakeholders, including Anthropic’s Responsible Scaling Officer.
    • Lead cross-functional coordination and planning for RSP-related projects, acting as the glue between product, engineering, research, and others.
    • Collaborate with team leads to define, scope, and sequence RSP-related projects that fall between teams’ natural scopes of work.
    • Implement scalable program management frameworks, playbooks, and best practices.
    • Contribute to systems, processes, and tools to support technical program management and increase team productivity.
    • Foster a culture of accountability, rigor, and continuous improvement on the technical program management team.

Representative Projects:

    • Driving company-wide progress on initiatives related to the commitments listed in Anthropic’s Responsible Scaling Policy (RSP).
    • Developing a systematic way of gathering and aggregating internal forecasts on AI capabilities, to help the executive team and board estimate proximity to higher ASLs.
    • Synthesizing information to feed into our quarterly RSP implementation report, which the Responsible Scaling Officer presents to the Board and Long-Term Benefit Trust.
    • Synthesizing feedback from teams on operationalizing commitments, to feed into future iterations and improvements to the policy.

You might be a good fit if you:

    • Have 2+ years of experience in technical program and project management, with a track record of successfully delivering complex, cross-functional technical projects.
    • Could help teams operationalize the high-level RSP commitments, leaning on familiarity with concepts in the RSP such as machine learning research, model evaluations, and safety research.
    • Thrive in unstructured environments, and have a knack for bringing order to chaos, thoughtfully balancing setting strategic priorities with rapid and high-quality execution.
    • Have strong interpersonal and communication skills that enable you to influence without authority and build cross-organizational support, cooperation, and action around initiatives and process adoption.
    • Are deeply passionate about Anthropic’s safety mission and ensuring that AI is developed safely.

Annual Salary (USD)

    • The expected salary range for this position is $280,000 to $320,000 USD
Hybrid policy & US visa sponsorship: Currently, we expect all staff to be in our office at least 25% of the time. We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate; operations roles are especially difficult to support. But if we make you an offer, we will make every effort to get you into the United States, and we retain an immigration lawyer to help with this.

We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed.  Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.

Compensation and Benefits*
Anthropic’s compensation package consists of three elements: salary, equity, and benefits. We are committed to pay fairness and aim for these three elements collectively to be highly competitive with market rates.

Equity - On top of this position's salary (listed above), equity will be a major component of the total compensation. We aim to offer higher-than-average equity compensation for a company of our size, and communicate equity amounts at the time of offer issuance.

US Benefits - The following benefits are for our US-based employees:
- Optional equity donation matching at a 3:1 ratio, up to 50% of your equity grant.
- Comprehensive health, dental, and vision insurance for you and all your dependents.
- 401(k) plan with 4% matching.
- 21 weeks of paid parental leave.
- Unlimited PTO – most staff take between 4 and 6 weeks each year, sometimes more!
- Stipends for education, home office improvements, commuting, and wellness.
- Fertility benefits via Carrot.
- Daily lunches and snacks in our office.
- Relocation support for those moving to the Bay Area.

UK Benefits - The following benefits are for our UK-based employees:
- Optional equity donation matching at a 3:1 ratio, up to 50% of your equity grant.
- Private health, dental, and vision insurance for you and your dependents.
- Pension contribution (matching 4% of your salary).
- 21 weeks of paid parental leave.
- Unlimited PTO – most staff take between 4 and 6 weeks each year, sometimes more!
- Health cash plan.
- Life insurance and income protection.
- Daily lunches and snacks in our office.

* This compensation and benefits information is based on Anthropic’s good faith estimate for this position as of the date of publication and may be modified in the future. Employees based outside of the UK or US will receive a different benefits package. The level of pay within the range will depend on a variety of job-related factors, including where you place on our internal performance ladders, which is based on factors including past work experience, relevant education, and performance on our interviews or in a work trial.

How we're different
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills. We do not have boundaries between engineering and research, and we expect all of our technical staff to contribute to both as needed.

The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.

Come work with us!
Anthropic is a public benefit corporation based in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.
