Machine Learning Engineer, Platforms

About Stability: 

Stability AI is a community- and mission-driven, open-source artificial intelligence company that cares deeply about real-world implications and applications. Our most considerable advances grow from our diversity in working across multiple teams and disciplines. We are unafraid to go against established norms and explore creativity. We are motivated to generate breakthrough ideas and convert them into tangible solutions. Our vibrant communities consist of experts, leaders, and partners across the globe who are developing cutting-edge open AI models for Image, Language, Audio, Video, 3D, and Biology.

About the role: 

We are looking for Machine Learning engineers to join our Platform and Inference team who are passionate about generative models and creative applications of AI. In particular, we are looking for people who have experience developing model-serving pipelines that operate at scale and who know the state-of-the-art techniques for optimization and feature development. We want highly creative ML engineers who are motivated to push the boundaries of generative models. You will have access to state-of-the-art high-performance computing resources, and you will work alongside top researchers and engineers to make a real impact in the fast-growing world of generative AI.

Responsibilities:  

  • Lead efforts to drive the design, development, and production of customer-facing ML systems, with specific reference to inference and API environments
  • Work with the Platform and Inference teams on building pipelines for the next generation of models, assisting with areas such as optimization, model tuning and deployment, HPC clusters, and tooling
  • Be a strategic thought partner for leaders across the organization on driving business impact through machine learning
  • Work on the commercial side: productionizing generative models and building the infrastructure to serve them at scale
  • Be part of the team bringing new Stability models and pipelines to API customers
  • Prototype and productionize inference platform improvements and new features 

Qualifications:

  • 5+ years working on machine learning projects, including inference and pipeline development
  • Solid knowledge of the Python scientific stack, PyTorch, and at least one high-performance inference framework (e.g., TensorRT)
  • Experience profiling and optimizing deep neural networks, including knowledge of GPU profiling tools such as NVIDIA Nsight
  • Familiarity with Python-based image manipulation/encoding/decoding frameworks, such as OpenCV
  • Experience with cloud orchestration systems such as Kubernetes and cloud providers such as AWS, GCP, and Azure
  • Ability to write robust and maintainable client-server architectures and APIs
  • Ability to rapidly prototype solutions and iterate on them with tight product deadlines
  • Experience training and/or deploying ML models on AWS (SageMaker a plus) or Google Cloud
  • Strong communication, collaboration, and documentation skills
  • Experience with building interactive web demos that serve generative ML models
  • Experience with the open-source ML ecosystem (HuggingFace, W&B, etc.)
  • Experience with Linux and command line tools

Equal Employment Opportunity:

We are an equal opportunity employer and do not discriminate on the basis of race, religion, national origin, gender, sexual orientation, age, veteran status, disability or other legally protected statuses.
