Research Engineer

Introduction
The Center for AI Safety is a non-profit organization based in San Francisco, dedicated to research and the advancement of the field of AI safety. We believe that artificial intelligence will be a powerful technology that will dramatically change society, and our aim is to mitigate potential high-consequence risks. We are doing this through:

- Technical and conceptual research;
- Promotion of safety within the broader machine learning community;
- Collaboration with academics, industry researchers, philosophers, and policymakers who are uniquely positioned to influence the future of AI safety.

We’re seeking a skilled research engineer who is aligned with our mission to develop and promote the field of AI safety. As a research engineer here, you will pursue a variety of research projects in areas such as power aversion, trojans, machine ethics, and reward hacking. You will help write and submit papers for publication at top conferences. You will collaborate with both internal research staff (e.g., Dan Hendrycks) and academics at top universities, including Stanford, UC Berkeley, CMU, and MIT. You will have access to our compute cluster to run experiments at scale on large language models. You’ll also have the opportunity to grow the field of AI safety by advising and leading competitions, workshops, and socials at top ML conferences, and by identifying other ways for CAIS to engage with the global research community.

Example Projects Include:
  • Fine-tuning large-scale transformers and evaluating them on benchmarks such as HarmBench.
  • Designing and creating new datasets to evaluate the robustness of different models.
  • Evaluating models in sequential decision-making games.
  • Developing and launching ML competitions (e.g., Trojan Detection Challenge) and other initiatives to grow the AI safety field.
  • Collaborating with academics on research topics such as transparency, robustness, honest AI, and interpretable uncertainty.

You might be a good fit if you:
  • Have a degree in Computer Science (or a related machine learning field).
  • Have co-authored an NLP paper at a top conference. Bonus if you are the first author.
  • Are aligned with our mission to reduce societal-scale risks from AI. Bonus if you have a point of view on the most tractable and important technical problems to solve.
  • Are able to read an ML paper, understand the key result, and understand how it fits into the broader literature. Bonus if this spurs ideas on directions for future research.
  • Are familiar with relevant frameworks and libraries (e.g., PyTorch and Hugging Face).
  • Have experience launching and training distributed ML jobs, i.e., can scale machine learning systems to hundreds or thousands of GPUs.
  • Communicate clearly and promptly with teammates, including with non-technical teams.
  • Are motivated by the idea of building the field of AI safety through AI-safety-focused events at top ML conferences and/or other initiatives, and have ideas for doing so.
  • Can work well with others in both large and small groups. Have experience collaborating with technical and non-technical professionals.

Benefits:
  • Health, dental, and vision insurance for you and your dependents (100% coverage for employees)
  • Competitive PTO
  • 401(k) plan with 4% matching
  • Lunch and dinner at the office
  • Commuter benefits
  • Personalized ergonomic technology set-up
  • Access to some of the top talent working on technical and conceptual research in the space

We’re a young organization and currently building out our benefits offerings, so we expect this list to grow!
For this role, we are considering mid-level research engineers with a salary range of $120,000–$160,000. We are a small organization in a quickly evolving field, and we believe in-person collaboration is key to our success, so we are looking for candidates who live in the San Francisco Bay Area or are willing to relocate.

If you have any questions about the role, feel free to reach out to hiring@safe.ai.

The Center for AI Safety is an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability status, protected veteran status, or any other characteristic protected by law.

Some studies have found that women and underrepresented minority candidates are less likely to apply if they don't meet every listed qualification. The Center for AI Safety values candidates of all backgrounds. If you find yourself excited by the position but don't check every box in the description, we encourage you to apply anyway!