Our PyTorch interface library is a lightweight wrapper exposed through API calls; adapting an existing PyTorch implementation typically requires adding only a few extra lines of code.

Get started


The Cerebras SDK allows researchers to extend the platform and develop custom kernels – empowering them to push the limits of AI and HPC innovation.

Request access

Cerebras Model Zoo

This repository contains examples of common deep learning models that demonstrate best practices for writing code for Cerebras hardware.


Developer blogs

Accelerating Large Language Model Training with Variable Sparse Pre-training and Dense Fine-tuning

We reduced pre-training FLOPs by 64% using sparsity. To the best of our knowledge, this is the largest GPT model trained with unstructured weight sparsity…

Variable Sequence Length Training for Long-Context Large Language Models

We show that it is possible to accelerate training of large language models with long-context capabilities using a simple staged training method. Faster to…

SlimPajama: A 627B token, cleaned and deduplicated version of RedPajama

Today we are releasing SlimPajama, the largest deduplicated, multi-corpora, open-source dataset for training large language models.

Efficient Large-Scale GPT Training Using a Cerebras Wafer-Scale Cluster

Cerebras has built a platform for push-button training of large language models that can accelerate time to insights without having to orchestrate across a…


Don’t see your question?

Send us an email at

Please find our example reference model implementation here:

To get access to our full list, please contact us at 

Please sign up for our newsletter!