ML Integration and Ops Engineer

Sunnyvale CA, San Diego CA, or Toronto Canada

Cerebras has developed a radically new chip and system to dramatically accelerate deep learning applications. Our system runs training and inference workloads orders of magnitude faster than contemporary machines, fundamentally changing the way ML researchers work and pursue AI innovation.

We are innovating at every level of the stack – from chip, to microcode, to power delivery and cooling, to new algorithms and network architectures at the cutting edge of ML research. Our fully-integrated system delivers unprecedented performance because it is built from the ground up for deep learning workloads.

About The Role

As an MTS (ML Integration and Ops Engineer), you will play a pivotal role in bringing together all the software and hardware components that make large-scale LLM model training simple and easy to use. You will be part of the MIQ (ML Integration and Quality) team, which focuses on software feature integration, ML training, pre-deployment/production validation, driving POCs for customers, and managing customer workloads. In this role, you will influence best testing practices, sound debugging methodology, and effective cross-team communication, and advocate for world-class products.

  • Drive technical projects that bring multiple teams and various software and hardware components together to make large-scale LLM model training simple and easy to use
  • Bring sound integration methodology, effective communication, and strong debugging skills
  • Break complex tasks down into smaller ones; be a problem solver
  • Automate workflows and testbed setups, and build tools for monitoring and debugging
  • Devise creative ways to break Cerebras software and identify potential problems
  • Contribute to developing software specifications with a focus on ML products
Skills & Qualifications 
  • Master's degree in Computer Science or Electrical Engineering with 0-6 years of industry experience
  • Experience in product validation for compute/machine learning/networking/storage systems in a large-scale enterprise environment
  • Experience debugging issues in large distributed-systems environments
  • Strong automation and programming skills in one or more languages such as Python, C++, or Go
  • Strong knowledge of software system design
  • Knowledge of ML workflows and frameworks such as TensorFlow or PyTorch
  • Hands-on experience training LLMs
  • Hands-on experience with containers and Kubernetes
  • Experience driving projects across multiple teams
Why Join Cerebras

People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we’ve reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:

  1. Build a breakthrough AI platform beyond the constraints of the GPU
  2. Publish and open source their cutting-edge AI research
  3. Work on one of the fastest AI supercomputers in the world
  4. Enjoy job stability with startup vitality
  5. Enjoy a simple, non-corporate work culture that respects individual beliefs

Read our blog: Five Reasons to Join Cerebras in 2024.

Apply today and become part of the forefront of groundbreaking advancements in AI.

Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them.
