Our PyTorch interface library is a lightweight wrapper for PyTorch programs, exposed through API calls and designed so that an existing PyTorch implementation can be adapted with only a few extra lines of code.
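A minimal sketch of the "few extra lines" pattern described above. The standard PyTorch training loop below is real and runnable; the Cerebras-specific lines are shown only as comments, and the names in them (`cerebras_pytorch`, `compile`) are placeholders for illustration, not the documented API.

```python
import torch
import torch.nn as nn

# An existing, ordinary PyTorch setup: model, optimizer, loss.
model = nn.Linear(4, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

# With the Cerebras wrapper, the only additions to this script would be
# a few lines along these lines (placeholder names, not the real API):
#   import cerebras_pytorch as cbtorch   # hypothetical import
#   model = cbtorch.compile(model)       # hand the model to the wrapper

# The training loop itself stays standard PyTorch.
x = torch.randn(8, 4)
y = torch.randn(8, 2)
for _ in range(3):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```

The point of the sketch is the shape of the integration: the model definition and training loop are untouched, and the wrapper is added around them.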

Get started


Integration with TensorFlow is via the Cerebras Estimator, a wrapper class we developed based on the standard TensorFlow Estimator and standard TensorFlow semantics.
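A hedged sketch of what "a wrapper based on the standard TensorFlow Estimator" implies in practice: the `model_fn` and standard `tf.estimator.Estimator` usage below are ordinary TensorFlow 1.x-style Estimator code, and the Cerebras substitution is shown only as a comment with placeholder names (the import path and arguments are assumptions, not verified API).

```python
import tensorflow as tf

def model_fn(features, labels, mode, params):
    # A standard TensorFlow model_fn: model, loss, and train op.
    logits = tf.keras.layers.Dense(2)(features)
    loss = tf.reduce_mean(tf.square(logits - labels))
    train_op = tf.compat.v1.train.GradientDescentOptimizer(0.1).minimize(
        loss, global_step=tf.compat.v1.train.get_or_create_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

# Standard TensorFlow Estimator usage:
estimator = tf.estimator.Estimator(model_fn=model_fn)

# Per the description above, targeting Cerebras would mean swapping in the
# wrapper class, something like (placeholder names, hypothetical path):
#   from cerebras.estimator import CerebrasEstimator  # hypothetical import
#   estimator = CerebrasEstimator(model_fn=model_fn)
```

Because the wrapper follows standard Estimator semantics, the `model_fn` and the `train`/`evaluate` calls would remain unchanged.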

Get started


The Cerebras SDK allows researchers to extend the platform and develop custom kernels – empowering them to push the limits of AI and HPC innovation.

Request access

Cerebras Model Zoo

This repository contains examples of common deep learning models that demonstrate best practices for writing code for Cerebras hardware.


Developer blogs

Efficient Large-Scale GPT Training Using a Cerebras Wafer-Scale Cluster

Cerebras has built a platform for push-button training of large language models that can accelerate time to insights without having to orchestrate across a…

Cerebras Architecture Deep Dive: First Look Inside the HW/SW Co-Design for Deep Learning [Updated]

Our ML-optimized architecture enables the largest models to run on a single device. With data parallel-only scale out and native unstructured sparsity…

Fine-Tuning with Cerebras AI Model Studio Launchpad

Cerebras shares research showing smaller foundation models that are fine-tuned on domain-specific tasks outperform larger foundation models.

Cerebras-GPT: A Family of Open, Compute-efficient, Large Language Models

Cerebras open sources seven GPT-3 models from 111 million to 13 billion parameters. Trained using the Chinchilla formula, these models set new benchmarks for…


Don’t see your question?

Send us an email at

Please find our example reference model implementation here: To get access to our full list, please contact us at 

Please sign up for our newsletter!