European AI Safety Policy Lead - Technical

About the Team

OpenAI’s mission is to ensure that general-purpose artificial intelligence benefits all of humanity. We believe that achieving our goal requires effective engagement with public policy stakeholders and the broader community impacted by AI. Accordingly, our Global Affairs team builds authentic, collaborative relationships with public officials and the broader AI policymaking community to inform and support our shared work in these domains. We ensure that insights from policymakers inform our work and, in collaboration with our colleagues and external stakeholders, help shape policy guardrails, industry standards, and the safe and beneficial development of AI tools.

About the Role

As the AI Safety Policy Lead, you will engage on the full spectrum of AI safety policy issues and, more broadly, support the Global Affairs team in the region with technical expertise and knowledge of LLMs and AI technologies.

OpenAI releases industry-leading research and tools. You will face new challenges as the impact of cutting-edge generative AI technologies continues to be explored and as the needs of the organization evolve. Day-to-day work may encompass anything from helping to shape strategic initiatives and policy documents to preparing our leaders for engagements with government officials or representing OpenAI in private and public forums.

We are looking for a self-directed and creative individual who combines a technical and research background in LLMs and AI technologies with experience engaging effectively with policymakers, research institutes, academics, and civil society.

This strategic yet hands-on role will report to the Head of European Policy & Partnerships and work closely with key internal and external partners. 

This role will be based out of London and will require frequent travel to meet with key stakeholders. We offer relocation assistance to new employees. 

We're looking for a blend of qualifications, including:

  • 3-5 years of experience in (technical) research and policy work on AI Safety
  • Demonstrated interest in and ability to engage with policymakers, regulators, civil society, and academics in nuanced discussions across the wide range of AI safety issues
  • Technical background (ideally a Master’s or PhD in ML/AI) with a deep understanding of how LLMs and AI systems function and how they are built and trained, along with practical experience in thinking about how to do so safely
  • Experience working on topics such as AI risk assessment, model safety, robustness, and misinformation/disinformation; ideally, experience advising governments on policy actions and work in this space
  • Existing network and credibility within the AI Safety community in Europe
  • Ability to assess and understand the impact of legislative and regulatory proposals on OpenAI’s product and research roadmap

You’ll thrive in this role if you have: 

  • An established network and credibility with EU Member States and international policymakers, regulators, civil society, and other stakeholders
  • An engineering-level understanding of AI technology and the ability to get to answers on tricky technical questions yourself (e.g., reading arXiv papers or codebases to answer a policymaker’s question)
  • Sound judgment and outstanding personal integrity
  • Ability to execute in fast and flexible environments through rapid cycles of analysis, decision, and action
  • Excellent communication, presentation, and interpersonal skills, with the ability to convey complex technical and policy concepts to diverse audiences
  • Strong strategic thinking, problem-solving, and project management skills
  • Demonstrated knowledge and understanding of the European Union policy-making system, institutions, and processes, and the key policy issues and debates related to AI
  • Track record of effectively working with cross-functional teams, especially engineering and research teams, and aligning a diverse range of internal and external partners
  • Genuine care about, and knowledge of, the impact of technology on society
  • Previous work on AI issues and technical AI development expertise (a significant plus)

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. 

We are an equal opportunity employer and do not discriminate on the basis of race, religion, national origin, gender, sexual orientation, age, veteran status, disability or any other legally protected status. 

For US-based candidates: Pursuant to the San Francisco Fair Chance Ordinance, we will consider qualified applicants with arrest and conviction records.

We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.

Apply for this job