Machine Learning Engineer, MLOps & Evaluation

Who we are

At Twelve Labs, we are pioneering cutting-edge multimodal foundation models that comprehend videos the way humans do. Our models have redefined the standard in video-language modeling, giving us more intuitive and far-reaching capabilities and fundamentally transforming the way we interact with and analyze media.

With a remarkable $77 million in Seed and Series A funding, our company is backed by top-tier venture capital firms such as NVIDIA’s NVentures, NEA, Radical Ventures, and Index Ventures, and prominent AI visionaries and founders such as Fei-Fei Li, Silvio Savarese, Alexandr Wang and more. Headquartered in San Francisco, with an influential APAC presence in Seoul, our global footprint underscores our commitment to driving worldwide innovation.

We are a global company that values the uniqueness of each person’s journey. It is the differences in our cultural, educational, and life experiences that allow us to constantly challenge the status quo. We are looking for individuals who are motivated by our mission and eager to make an impact as we push the bounds of technology to transform the world. Join us as we revolutionize video understanding and multimodal AI.

About the role

As a Machine Learning Engineer (MLOps & Evaluation) at Twelve Labs, you will be a vital member of the ML Deployment & Operations Team. Your primary role is to build and deploy machine learning pipelines on our ML infrastructure using the foundation models provided by the ML Modeling & Research Team. You will ensure seamless end-to-end deployment of models and implement MLOps best practices to automate the integration, deployment, and training process. A critical KPI for this role is minimizing the time from model training to deployment on our machine learning infrastructure while serving the model as efficiently as possible in terms of latency and throughput. We're looking for someone who is excited to collaborate across the ML Infrastructure, ML Modeling, and Data teams.
In this role, you will
  • Be responsible for Model Serving and ModelOps: manage model-related metadata (using the model registry), implement hardware-accelerated optimization for each model engine, and containerize models for efficient serving.
  • Construct an ML pipeline that proficiently serves the trained foundation models in our ML Infrastructure.
  • Implement model validation best practices by running automatic evaluation benchmarks and performing output comparisons.
  • Develop an automated training/fine-tuning pipeline that includes rigorous data and model validation against the baseline model.
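The "validate against the baseline" responsibility above is, at its core, a promotion gate: a candidate model ships only if it does not regress on the evaluation benchmark. A minimal sketch of such a gate follows; the metric names, scores, and tolerance are illustrative assumptions, not Twelve Labs' actual benchmark or pipeline.

```python
def passes_validation(candidate_scores, baseline_scores, max_regression=0.01):
    """Return True if the candidate matches or beats the baseline on every
    benchmark metric, allowing a small per-metric regression tolerance."""
    for metric, baseline in baseline_scores.items():
        # A metric missing from the candidate's results counts as a failure.
        if candidate_scores.get(metric, float("-inf")) < baseline - max_regression:
            return False
    return True

# Hypothetical benchmark results for a baseline and a candidate model.
baseline = {"retrieval_recall@5": 0.81, "caption_bleu": 0.34}
candidate = {"retrieval_recall@5": 0.83, "caption_bleu": 0.335}
print(passes_validation(candidate, baseline))  # True: within tolerance
```

In a real pipeline this check would run automatically after training, with the baseline scores fetched from the model registry rather than hard-coded.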

You may be a good fit if you have
  • 5+ years of software development experience, including experience in deploying machine learning models
  • 3+ years of experience in building and deploying an end-to-end machine learning pipeline, or equivalent
  • Experience establishing and maintaining secure software and system development environments
  • Experience designing control and sandboxing systems for AI research
  • Willingness to learn emerging AI technologies and a practical mindset toward productization
  • A black-box-level understanding of Transformer-based neural networks
  • Experience in system development for model serving and inference

Desired experience
  • Experience with MLOps for managing the entire machine learning lifecycle, including model registry and versioning functionalities
  • Experience with hardware-accelerated optimization techniques
  • Experience in identifying and mitigating pain-points in ML research & modeling processes
  • ML research experience would be helpful, as this role requires moving fluidly between the research and software sides

Relevant tech stack
  • Language: Python, C++, CUDA
  • ML / Platform: PyTorch, Docker, Kubernetes
  • ML Demo page: Gradio, Streamlit
  • MLOps: MLflow, Weights & Biases
  • Automation: Airflow, Kubeflow
  • Model serving: Triton, FasterTransformer
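The model-registry work mentioned in the responsibilities and in this stack (MLflow's registry, for instance) boils down to versioning model artifacts and tracking their lifecycle stage so the serving layer always knows which artifact to load. Below is a purely illustrative, in-memory sketch of that bookkeeping; the class and method names are hypothetical and not MLflow's actual API.

```python
from dataclasses import dataclass

@dataclass
class ModelVersion:
    name: str
    version: int
    uri: str            # where the serialized model artifact lives
    stage: str = "None"  # lifecycle: None -> Staging -> Production

class ModelRegistry:
    """Toy stand-in for a model registry (MLflow's registry plays this role)."""

    def __init__(self):
        self._versions = {}  # model name -> list of ModelVersion

    def register(self, name, uri):
        """Record a new version; version numbers increase monotonically."""
        versions = self._versions.setdefault(name, [])
        mv = ModelVersion(name=name, version=len(versions) + 1, uri=uri)
        versions.append(mv)
        return mv

    def promote(self, name, version, stage):
        """Move a registered version to a new lifecycle stage."""
        self._versions[name][version - 1].stage = stage

    def latest(self, name, stage="Production"):
        """Newest version in a given stage -- what the serving layer queries."""
        candidates = [v for v in self._versions[name] if v.stage == stage]
        return max(candidates, key=lambda v: v.version)

registry = ModelRegistry()
registry.register("video-encoder", "s3://models/video-encoder/v1")
v2 = registry.register("video-encoder", "s3://models/video-encoder/v2")
registry.promote("video-encoder", v2.version, "Production")
prod = registry.latest("video-encoder")
print(prod.version, prod.uri)  # 2 s3://models/video-encoder/v2
```

The serving side then only ever asks "give me the current Production version of model X," which is what decouples training cadence from deployment cadence.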

Interview and Onboarding Process

    Recruiter Phone Screen -> Hiring Manager Call -> Technical Interview and/or Take Home Assignment -> Culture Interview -> Reference Checks

    We're also excited to share that we'll do global onboarding in Seoul for all new hires (company-sponsored travel).

    Even if there are a few checkboxes that aren’t ticked through your prior experience, we still encourage you to apply! If you are a 0-to-1 achiever, a ferocious learner, and a kind and fun team player who motivates others, you will find a home at Twelve Labs.

    We welcome applicants from all walks of life and are committed to equal-opportunity employment. We cherish and celebrate diversity not just because it is the right thing to do, but because it makes our company much stronger.

    Benefits and Perks

    🤝 An open and inclusive culture and work environment.
    🧑‍💻 Work closely with a collaborative, mission-driven team on cutting-edge AI technology.
    🦷 Full health, dental, and vision benefits.
    ✈️ Extremely flexible PTO and parental leave policy. Office closed the week of Christmas and New Year's.
    🏙 Remote-flexible: offices in San Francisco and Seoul, plus a coworking stipend.
    🛂 Visa support (such as H-1B and OPT transfer for US employees).