About the Team
OpenAI’s mission is to ensure that general-purpose artificial intelligence benefits all of humanity. We believe that achieving this mission requires real-world deployment and iterative improvement based on what we learn.
The Intelligence and Investigations team supports this by identifying and investigating misuse of our products, especially new types of abuse. This enables our partner teams to develop data-backed product policies and build scaled safety mitigations. Precisely understanding abuse allows us to safely enable users to build useful things with our products.
About the Role
As a technical abuse investigator on the Intelligence and Investigations team, you will be responsible for detecting misuse of our platform and services. Specifically, you will focus on cases where users attempt to use our platform in connection with prohibited activities such as developing or delivering biological and/or chemical threats to harm people, critical resources and infrastructure, or the environment. OpenAI has strict prohibitions and policies in this area, and you will detect, disrupt, and enforce against actors who violate our policies.
This role requires domain-specific expertise, experience investigating sophisticated threats, and the ability to navigate ambiguous signals in a complex and adversarial threat environment.
You will respond to time-sensitive escalations and will be expected to present your investigative work, both in writing and verbally, to key stakeholders across government, industry, and civil society, when required. You will also help inform the company’s evolving threat response and integrity monitoring and mitigation stack, while working closely on individual cases and enforcement assessments.
This role is remote-friendly, though you’re welcome to work from our San Francisco office if desired. The role includes participation in an on-call rotation that will involve resolving urgent escalations outside of normal work hours. Some investigations may involve sensitive content.
In this role, you will:
Detect, investigate, and disrupt the attempted misuse of OpenAI products for the development or dissemination of biological threats, including dual-use misuse and emerging biothreat vectors. You may also be expected to coordinate across related domains (e.g., chemical threats).
Partner closely with teams across Policy, Legal, Integrity, Global Affairs, and Security to conduct robust investigations, including cross-internet and open-source research to trace and understand abuse and ensure OpenAI’s mitigations address evolving needs in the space.
Develop abuse signals and tracking strategies to proactively detect users attempting dual-use or biohazard-related misuse of our platform, and review content for enforcement decisions.
Communicate findings from your investigations to internal stakeholders and, at times, external partners, including regulatory or scientific organizations.
Develop a categorical understanding of our product surfaces in the biosecurity space, and work with engineering teams to improve data visibility and internal tooling.
Brief company leadership and key external stakeholders on your work.
This role requires U.S. government security clearance. To comply with this requirement, applicants for this position must be U.S. citizens.
You might thrive in this role if you:
Have industry-leading experience in biosecurity, biological weapons non-proliferation, dual-use research of concern (DURC), or related biodefense fields
Have strong familiarity with technical investigations, especially using SQL and Python, in a government, military, or technology-company setting
Have demonstrated experience in risk mitigation (e.g., adversarial thinking and a record of success in threat mitigation)
Have worked on investigations related to biological threat actors, malicious dual-use exploitation, or responsible innovation in synthetic biology or bioengineering
Have 5+ years of experience tracking misuse and/or abuse in biosecurity or life sciences domains, or equivalent education in these domains
Have at least 2 years of experience developing innovative detection solutions and conducting open-ended research to solve real-world problems
Have experience presenting analytical work in public or policy settings
Have experience scaling and automating processes, especially with language models
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer and do not discriminate on the basis of race, religion, national origin, gender, sexual orientation, age, veteran status, disability or any other legally protected status.
OpenAI Affirmative Action and Equal Employment Opportunity Policy Statement
For US Based Candidates: Pursuant to the San Francisco Fair Chance Ordinance, we will consider qualified applicants with arrest and conviction records.
We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.
OpenAI Global Applicant Privacy Policy
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.