PyTorch

Our PyTorch interface library is a thin wrapper around PyTorch, exposed through API calls, that can be added to an existing PyTorch implementation with just a few extra lines of code.
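As a rough sketch of that workflow, the snippet below wraps a stock PyTorch model for a Cerebras system. The module name cerebras.pytorch and the backend/compile helpers shown are assumptions for illustration only; see the Get started guide for the exact, current API.

```python
# Illustrative sketch only: the cerebras.pytorch import name and the
# backend/compile helpers below are assumptions about the interface,
# not the documented API. Consult the Get started guide for details.
import torch
import cerebras.pytorch as cstorch  # assumed import name

model = torch.nn.Linear(784, 10)                          # any existing PyTorch model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = torch.nn.CrossEntropyLoss()

# The "few extra lines": pick the Cerebras backend and compile the model.
backend = cstorch.backend("CSX")                  # assumed helper
compiled_model = cstorch.compile(model, backend)  # assumed helper

def train_step(batch):
    # The rest of the training loop stays ordinary PyTorch.
    inputs, targets = batch
    loss = loss_fn(compiled_model(inputs), targets)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss
```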

Get started

SDK

The Cerebras SDK allows researchers to extend the platform and develop custom kernels – empowering them to push the limits of AI and HPC innovation.

Request access

Cerebras Model Zoo

This repository contains examples of common deep learning models that demonstrate best practices for writing code for Cerebras hardware.

Repo

Developer blogs

Accelerating Large Language Model Training with Variable Sparse Pre-training and Dense Fine-tuning

We reduced pre-training FLOPs by 64% using sparsity. To the best of our knowledge, this is the largest GPT model trained with unstructured weight sparsity…
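As a generic illustration of the idea (not the training recipe from the post), unstructured weight sparsity zeroes individual weights rather than whole rows, columns, or blocks. Stock PyTorch pruning utilities can show what that looks like:

```python
import torch
import torch.nn.utils.prune as prune

# Generic illustration of unstructured weight sparsity (not the Cerebras
# training recipe from the post): zero out, say, 50% of a layer's weights
# element by element, with no block or row/column structure imposed.
layer = torch.nn.Linear(512, 512)
prune.l1_unstructured(layer, name="weight", amount=0.5)

# The pruned weights are held at zero by a mask, so the fraction of
# zero entries matches the requested sparsity level.
sparsity = (layer.weight == 0).float().mean().item()
print(f"weight sparsity: {sparsity:.2%}")
```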

Variable Sequence Length Training for Long-Context Large Language Models

We show it is possible to accelerate the training of large language models with long-context capabilities using a simple staged training method. Faster to…
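As a toy, self-contained sketch of the staging idea (not the exact recipe from the post), the loop below spends most steps on short sequences and only a brief final stage on long ones; the model, data, and step counts are placeholders:

```python
import torch

# Toy sketch of staged sequence-length training (not the recipe from the
# post): train mostly on short sequences, then a short final stage on long
# sequences so long-context behaviour is learned at a fraction of the cost.
model = torch.nn.Linear(64, 64)                      # stand-in for an LLM
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

stages = [
    {"seq_len": 128, "steps": 900},   # bulk of training at short context
    {"seq_len": 512, "steps": 100},   # brief long-context stage
]

for stage in stages:
    for _ in range(stage["steps"]):
        # Random batch of the stage's sequence length; a real run would
        # draw variable-length documents from the training corpus instead.
        batch = torch.randn(8, stage["seq_len"], 64)
        loss = model(batch).pow(2).mean()
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```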

SlimPajama: A 627B token, cleaned and deduplicated version of RedPajama

Today we are releasing SlimPajama – the largest deduplicated, multi-corpora, open-source dataset for training large language models.
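SlimPajama is available on the Hugging Face Hub. Assuming the datasets library is installed and the repo id cerebras/SlimPajama-627B, it can be streamed without downloading the full corpus:

```python
from itertools import islice
from datasets import load_dataset

# Stream SlimPajama from the Hugging Face Hub rather than downloading
# the full 627B-token corpus up front.
dataset = load_dataset("cerebras/SlimPajama-627B", split="train", streaming=True)

for example in islice(dataset, 3):
    # Each record holds the raw text plus metadata about its source corpus.
    print(example["text"][:200])
```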

Efficient Large-Scale GPT Training Using a Cerebras Wafer-Scale Cluster

Cerebras has built a platform for push-button training of large language models that can accelerate time to insights without having to orchestrate across a…

FAQ

Don’t see your question?

Send us an email at developer@cerebras.net

Please find our example reference model implementations here: https://github.com/Cerebras/cerebras_reference_implementations. To get access to the full list of models, please contact us at developer@cerebras.net.

Please sign up for our newsletter!