Member of Technical Staff, Research Engineer (Inference)
Inflection AI is a public benefit corporation leveraging our world-class large language model to build the first AI platform focused on the needs of the enterprise.
Who we are:
Inflection AI was re-founded in March 2024, and our leadership team has assembled a team of kind, innovative, and collaborative individuals focused on building enterprise AI solutions. We are passionate about what we are building, enjoy working together, and strive to hire people with diverse backgrounds and experience.
Our first product, Pi, is an empathetic, conversational chatbot. Pi is a public instance built on our 350B+ frontier model with our sophisticated fine-tuning (10M+ examples), inference, and orchestration platform. We are now focused on building new systems that directly support the needs of enterprise customers using this same approach.
Want to work with us? Have questions? Learn more below.
About the Role
Member of Technical Staff, Research Engineer (Inference)
As part of Inflection’s commitment to deploying high-performance models for enterprise applications, our inference team ensures that these models run efficiently and effectively in real-world scenarios. Research engineers in this role optimize model inference, reduce latency, and improve throughput without compromising model quality, enabling robust deployment in enterprise environments.
This is a good role for you if you:
- Have experience deploying and optimizing LLMs for inference in both cloud and on-prem environments.
- Are adept at using tools and frameworks for model optimization and acceleration, such as ONNX, TensorRT, or TVM.
- Enjoy troubleshooting and solving complex problems related to model performance and scaling.
- Have a deep understanding of the trade-offs involved in model inference, including hardware constraints and real-time processing requirements.
- Are proficient with PyTorch and familiar with infrastructure management tools like Docker and Kubernetes for deploying inference pipelines.
Employee Pay Disclosures
At Inflection AI, we aim to attract and retain the best employees and compensate them in a way that appropriately and fairly values their individual contributions to the company. For this role, Inflection AI estimates that the starting annual base salary will fall in the range of approximately $175,000 - $325,000, depending on experience. This estimate can vary based on individual factors, so the actual starting annual base salary may be above or below this range.
Benefits
Inflection AI values and supports our team’s mental and physical health. We are focused on building a positive, safe, inclusive and inspiring place to work. Our benefits include:
- Diverse medical, dental and vision options
- 401k matching program
- Unlimited paid time off
- Parental leave and flexibility for all parents and caregivers
- Support for country-specific visa needs for international employees living in the Bay Area
Interview Process
Apply: Please apply on LinkedIn or our website for a specific role.
Interview: Interviews at Inflection proceed in two stages. When possible, we strive to conduct interviews in person.
- First, you will have an initial conversation with the Hiring Contact or a Recruiter.
- Following that, you will do up to five interviews with the team. Technical roles include at least two deep technical screens, which comprise programming exercises, AI exercises, and general technical interviews. We may also invite you to complete a take-home exercise or give a presentation. For non-technical roles, please prepare for a role-specific interview, such as a portfolio review.
Decision: We strive to get back to you within one week of your final interview.