Senior/Staff AI Research Engineer
Hume AI is seeking talented software engineers interested in working with our AI research team to build state-of-the-art large language models (LLMs). Our new LLM training method—reinforcement learning from human expression (RLHE)—learns human preferences from behavior in millions of audio and video recordings, making LLMs superhumanly helpful, interesting, funny, eloquent, honest, and altruistic. Join us in the heart of New York City and contribute to our endeavor to ensure that AI is guided by human values, the most pivotal challenge (and opportunity) of the 21st century.
About Us
Hume AI is dedicated to building artificial intelligence that is directly optimized for human well-being. We raised a Series B funding round at the beginning of the year and just launched the beta of our next flagship AI model, EVI 2, a foundational audio-language model that drives an empathic AI assistant for any application.
Our models understand subtle tones of voice, word emphasis, facial expression, and more, along with the reactions of listeners. These behaviors reveal our preferences—whether we find things interesting or boring; satisfying or frustrating; funny, eloquent, or dubious. We call learning from these signals “reinforcement learning from human expression” (RLHE). AI models trained with RLHE can serve as better question answerers, copywriters, tutors, call center agents, and more, even in text-only interfaces.
Our goal is to enable a future in which technology draws on an understanding of human emotional expression to better serve human goals. As part of our mission, we also conduct groundbreaking scientific research, publish in leading scientific journals like Nature, and support a non-profit, The Hume Initiative, which has released the first concrete ethical guidelines for empathic AI (www.thehumeinitiative.org). You can learn more about us on our website (https://hume.ai/) and read about us in WIRED, Forbes, and VentureBeat.
About the Role
We are looking for a talented software engineer to work alongside our research scientists to fine-tune a wide range of open-source and proprietary LLMs. In this role, you will build out software systems that support distributed model training, inference, and benchmarking, as well as massive-scale data collection, storage, preprocessing, and analysis. You will be working to solve some of today's most exciting AI research problems at industry scale.
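For illustration only (not a description of Hume AI's actual stack or methods): a minimal sketch of the kind of fine-tuning loop this work touches, using PyTorch with the Hugging Face transformers library. The model name, training text, and hyperparameters below are placeholders.

```python
# Illustrative sketch: a few supervised fine-tuning steps on a small
# open-source causal LM. Placeholders throughout; real work in this role
# involves distributed training and far larger models and datasets.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder open-source model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

texts = ["Hello! How can I help you today?"]  # placeholder training text
batch = tokenizer(texts, return_tensors="pt", padding=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for step in range(3):  # a few illustrative gradient steps
    outputs = model(**batch, labels=batch["input_ids"])
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    print(f"step {step}: loss = {outputs.loss.item():.4f}")
```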
Requirements
- Expertise in the Python ecosystem and popular ML libraries and tools (e.g., PyTorch)
- Experience writing robust and maintainable production-ready code
- Comfort iterating quickly on new and uncertain research directions
- 2+ years of experience training and/or fine-tuning transformer models with large-scale datasets of text, audio, image, and/or video data
Application Note
Please apply only to the position that best aligns with your qualifications. If you submit multiple applications or have applied within the past 6 months, only your initial submission will be considered.