Our PyTorch interface library is a thin wrapper around a PyTorch program, exposed through API calls, and can be added to an existing PyTorch implementation with only a few extra lines of code.
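The "few extra lines" pattern described above can be sketched in plain Python. Note that `CerebrasBackend` and `compile_for_backend` below are hypothetical stand-ins used only to illustrate the wrapper idea; they are not the actual Cerebras API (see the Model Zoo for real reference implementations).

```python
# Minimal, framework-agnostic sketch of the wrapper pattern: an existing
# step function is left untouched, and a wrapper redirects its execution.
# `CerebrasBackend` and `compile_for_backend` are hypothetical names.

class CerebrasBackend:
    """Hypothetical backend a wrapper library would route execution to."""
    def run(self, fn, *args):
        # A real integration would dispatch to the accelerator here;
        # this sketch simply invokes the original function.
        return fn(*args)

def compile_for_backend(fn, backend):
    """Wrap an existing step function without modifying its body."""
    def wrapped(*args):
        return backend.run(fn, *args)
    return wrapped

# Existing "training step" from a plain PyTorch-style implementation
# (a toy computation stands in for the forward/backward pass):
def train_step(x, w):
    return x * w

# The only extra lines an existing program would need:
backend = CerebrasBackend()
train_step = compile_for_backend(train_step, backend)

print(train_step(3, 2))  # -> 6
```

The wrapped function keeps its original call signature, which is why this style of integration leaves the rest of an existing codebase unchanged.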

Get started


The Cerebras SDK allows researchers to extend the platform and develop custom kernels – empowering them to push the limits of AI and HPC innovation.

Request access

Cerebras Model Zoo

This repository contains examples of common deep learning models that demonstrate best practices for writing code for Cerebras hardware.


Developer blogs

US DOE Achieves 88x Performance Speedup with Cerebras CS-2 Over H100 in Materials Modeling

Using the Cerebras CS-2, NETL implements the venerable Ising model to achieve an 88x speedup over highly optimized CUDA code running on an NVIDIA H100.

Cerebras Breaks Exascale Record for Molecular Dynamics Simulations

Cerebras has set a new record for molecular dynamics simulation speed that goes far beyond the exascale level. While this breakthrough has wide-ranging impacts…

Supercharge your HPC Research with the Cerebras SDK

Cerebras SDK 1.1.0, our second publicly available release, includes initial support for the WSE-3. Check out what researchers have been doing with the SDK, and…

Accelerating Large Language Model Training with Variable Sparse Pre-training and Dense Fine-tuning

We reduced pre-training FLOPs by 64% using sparsity. To the best of our knowledge, this is the largest GPT model trained with unstructured weight sparsity…


Don’t see your question?

Send us an email at

Please find our example reference model implementations here: To get access to our full list, please contact us at

Please sign up for our newsletter!