PyTorch

Our PyTorch interface library is a thin wrapper around PyTorch, exposed through a small set of API calls, so it can be added to an existing PyTorch implementation with only a few extra lines of code.

Get started
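As a rough illustration of what "a few extra lines" can look like, here is a minimal sketch that hands an ordinary PyTorch model to the Cerebras wrapper. The import path and the backend/compile calls are assumptions for illustration only, not the exact API; everything else is standard PyTorch. See the Get started guide for the real interface.

```python
# Illustrative sketch only. The Cerebras-specific names below (import path,
# backend/compile calls) are assumptions, not the documented API.
import torch
import torch.nn as nn

import cerebras.pytorch as cstorch  # assumed import path for the wrapper library

# An ordinary PyTorch model, unchanged.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# The "few extra lines": wrap the model so it is compiled for and executed on
# the Cerebras system (assumed call names).
backend = cstorch.backend("CSX")
compiled_model = cstorch.compile(model, backend)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# The training step itself remains ordinary PyTorch code.
def train_step(batch, labels):
    optimizer.zero_grad()
    loss = loss_fn(compiled_model(batch), labels)
    loss.backward()
    optimizer.step()
    return loss
```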

TensorFlow

TensorFlow integration is provided through the CerebrasEstimator, a wrapper class we developed on top of the standard TensorFlow Estimator that preserves standard TensorFlow semantics.

Get started
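A minimal sketch of the drop-in pattern is below: the model_fn and input_fn are ordinary TensorFlow Estimator code, and only the estimator class changes. The import path and the Cerebras-specific arguments (cs_ip, use_cs) are assumed names for illustration; consult the Get started guide for the exact interface.

```python
# Illustrative sketch only. The CerebrasEstimator import path and its extra
# arguments (cs_ip, use_cs) are assumptions, not the documented API.
import tensorflow as tf
from cerebras.tf.cs_estimator import CerebrasEstimator  # assumed import path

def input_fn():
    # Toy dataset standing in for a real input pipeline.
    features = tf.random.uniform([512, 784])
    labels = tf.random.uniform([512], maxval=10, dtype=tf.int64)
    return tf.data.Dataset.from_tensor_slices((features, labels)).batch(32).repeat()

def model_fn(features, labels, mode, params):
    # Standard Estimator model_fn, unchanged from a CPU/GPU implementation.
    logits = tf.keras.layers.Dense(10)(features)
    loss = tf.compat.v1.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
    optimizer = tf.compat.v1.train.GradientDescentOptimizer(learning_rate=0.01)
    train_op = optimizer.minimize(loss, global_step=tf.compat.v1.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)

# Swap tf.estimator.Estimator for CerebrasEstimator; the extra arguments below
# are assumed names for pointing the run at a Cerebras system.
estimator = CerebrasEstimator(
    model_fn=model_fn,
    model_dir="./model_dir",
    cs_ip="10.0.0.1",  # assumed parameter: address of the CS system
    use_cs=True,       # assumed parameter: execute on Cerebras hardware
)
estimator.train(input_fn=input_fn, max_steps=1000)
```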

SDK

The Cerebras SDK allows researchers to extend the platform and develop custom kernels – empowering them to push the limits of AI and HPC innovation.

Request access

Example Reference Implementations

This repository contains examples of common deep learning models demonstrating best practices for coding for the Cerebras hardware.

Repo

Developer blogs

What is Appliance Mode?

We created Appliance Mode for simplicity, ease of use, and robustness in training massive natural language AI models on the Cerebras Wafer-Scale Cluster.

Cerebras Architecture Deep Dive: First Look Inside the HW/SW Co-Design for Deep Learning

Our ML-optimized architecture enables the largest models to run on a single device. With data parallel-only scale out and native unstructured sparsity…

Context is Everything: Why Maximum Sequence Length Matters

GPU-Impossible™ sequence lengths on Cerebras systems may enable breakthroughs in Natural Language Understanding, drug discovery and genomics.

Training Multi-Billion-Parameter Models on a Single Cerebras System is Easy

Changing model size is trivial on Cerebras, rather than the major science project it becomes on conventional graphics accelerators.

FAQ

Don’t see your question?

Send us an email at developer@cerebras.net

Please find our example reference model implementations here: https://github.com/Cerebras/cerebras_reference_implementations. To get access to our full list, please contact us at developer@cerebras.net.
