National Security Risk Mitigation Lead, Global Affairs

About the Team

OpenAI’s mission is to ensure that general-purpose artificial intelligence benefits all of humanity. We believe that achieving our goal requires effective engagement with public policy stakeholders and the broader community impacted by AI. Accordingly, our Global Affairs team builds authentic, collaborative relationships with public officials and the broader AI policymaking community to inform and support our shared work in these domains. We ensure that insights from policymakers inform our work and, in collaboration with our colleagues and external stakeholders, seek to advance regulation, industry standards, and the safe and beneficial development of AI tools.

About the Role

We are looking for an experienced leader to drive OpenAI’s external collaborations and policy work on AI-related national security risks. Reporting to the Head of National Security Policy, you will work closely with OpenAI’s Safety Systems organization to surface, study, and respond to risks posed by AI capabilities in areas such as cyber, CBRN, and the misuse of autonomous agents. You will also collaborate closely with OpenAI’s pathfinding technical teams to understand and communicate rapid AI advancements and their safety and preparedness implications. You will be responsible for defining and executing our strategy for engaging with risk-relevant national security stakeholders to address these challenges at scale, including through public-private partnerships and joint evaluations of AI capabilities.

This position requires both flexibility and creativity in crafting policy responses to novel technical realities, coupled with a deep understanding of government stakeholders’ concerns and constraints. This is not a generalist national security position. It is best suited for those with specialized experience at the intersection of national security risk mitigation and emerging technologies, particularly AI. Experience in the technology sector, knowledge of the AI and national security policy landscape, and a technical understanding of generative AI are highly desirable.

This role is based in Washington, D.C. or San Francisco, CA, and may involve regular travel to meet with stakeholders. Relocation assistance is available.

In this role, you will:

  • Define and implement our strategy for engaging national security stakeholders focused on AI risks.
  • Lead partnerships aimed at evaluating and mitigating AI risks in areas like cyber or CBRN, in coordination with government experts.
  • Collaborate closely with OpenAI’s Safety Systems and Research teams to understand technical developments and to communicate their safety and preparedness implications.

You should thrive in this role if you:

  • Are an experienced leader with a strong network and credibility among national security stakeholders.
  • Are well-versed in national security issues related to emerging AI technologies, especially around risk mitigation in cyber, CBRN, or related domains.
  • Excel at coalition-building and enjoy coordinating complex projects with multiple stakeholders, but are also comfortable driving initiatives independently as an accountable individual contributor.
  • Embrace ambiguity and thrive in a rapidly evolving environment where technology and organizational needs are constantly shifting.
  • Are passionate about the potential of AI and technology in general, but thoughtful about their potential risks.

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. 

We are an equal opportunity employer and do not discriminate on the basis of race, religion, national origin, gender, sexual orientation, age, veteran status, disability or any other legally protected status. 

OpenAI Affirmative Action and Equal Employment Opportunity Policy Statement

For US Based Candidates: Pursuant to the San Francisco Fair Chance Ordinance, we will consider qualified applicants with arrest and conviction records.

We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.

Apply for this job