Our PyTorch interface library is a lightweight wrapper for PyTorch programs, exposed through API calls, that lets you adapt an existing PyTorch implementation by adding just a few extra lines of code.
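To make the "few extra lines" claim concrete, here is a minimal standard PyTorch training loop of the kind the wrapper is designed to slot into. This is a hypothetical illustration: the Cerebras-specific setup calls are release-specific and not reproduced here, so the comments only mark where they would go.

```python
import torch
import torch.nn as nn

# An ordinary PyTorch model and optimizer -- nothing Cerebras-specific yet.
# (The Cerebras wrapper's initialization/compile calls would be added around
# this setup; see the Cerebras documentation for the exact API in your release.)
model = nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

# Tiny synthetic regression problem: target is the sum of the input features.
x = torch.randn(32, 4)
y = x.sum(dim=1, keepdim=True)

losses = []
for step in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    losses.append(loss.item())
```

The training loop itself stays plain PyTorch; that is the point of a thin wrapper approach, where existing model code is reused rather than rewritten.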

Get started


Integration with TensorFlow is via the Cerebras Estimator, a wrapper class we developed based on the standard TensorFlow Estimator and standard TensorFlow semantics.

Get started


The Cerebras SDK allows researchers to extend the platform and develop custom kernels – empowering them to push the limits of AI and HPC innovation.

Request access

Cerebras Model Zoo

This repository contains examples of common deep learning models that demonstrate best practices for coding for Cerebras hardware.


Developer blogs

Cerebras Software Platform R1.7 is Out!

Our new release expands PyTorch support, releases code repositories for a range of extreme-scale GPT-style models, and introduces unique computer vision…

Harnessing the Power of Sparsity for Large GPT AI Models

Enabling innovation of novel sparse ML techniques to accelerate training and inference on large-scale language models.

Genomics in Unparalleled Resolution: Cerebras Wafer-Scale Cluster Trains Large Language Models on the Full COVID Genome Sequence

Our joint work with Argonne National Laboratory (ANL) and NVIDIA won the 2022 Gordon Bell Special Prize for HPC-Based COVID-19 Research.

Real-Time Computational Physics with Wafer-Scale Processing

Cerebras and NETL achieve two orders of magnitude performance improvements for computational physics using a simple Python API for the Wafer-Scale Engine.


Don’t see your question?

Send us an email at

Please find our example reference model implementation here:

To get access to our full list, please contact us at

Please sign up for our newsletter!