Key Enabling Technologies

An HPC Cluster on a Single Chip

With 900,000 cores and petaflops of compute, each CS-3 delivers the performance of an HPC cluster in a single device.


44GB of On-Chip Memory

With 880x more on-chip memory than an H100, run the most daunting simulations with nearly unlimited memory bandwidth.


Gigantic On-Chip Fabric

A gigantic fabric connects 900,000 programmable compute elements with single-cycle latency, enabling best-in-class small-message passing.
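As a purely illustrative model of that kind of fine-grained, nearest-neighbor communication (the function and 1-D ring topology below are invented for this sketch; they are not the CS-3 fabric or its API), each "compute element" holds one value and combines it with a single small message from its left neighbor:

```python
def shift_exchange(values):
    """Toy nearest-neighbor exchange on a 1-D ring of compute elements.

    Illustrative only: each element replaces its value with the sum of
    its own value and the one small message received from its left
    neighbor. Real wafer-scale fabrics do this in hardware per cycle.
    """
    n = len(values)
    # One small message per element: the left neighbor's current value
    received = [values[(i - 1) % n] for i in range(n)]
    return [values[i] + received[i] for i in range(n)]
```

For example, `shift_exchange([1, 2, 3, 4])` yields `[5, 3, 5, 7]`; the point of a single-cycle fabric is that this exchange costs roughly one hop rather than a round trip through shared memory.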


Software Development Kit

Allows researchers to extend the platform and develop custom kernels, empowering them to push the limits of AI and HPC innovation.

Featured Case Studies

Cerebras and National Laboratories Breakthrough

Cerebras collaborated with researchers at Sandia National Laboratories, Lawrence Livermore National Laboratory, Los Alamos National Laboratory, and the National Nuclear Security Administration on this record-setting result.

Cerebras Systems Announces 130x Performance Improvement on Key Nuclear Energy Simulation over Nvidia A100 GPUs

Continuous Energy Monte Carlo Particle Transport Kernel Outperforms Highly Optimized GPU Version, Unlocking New Potential in Fission and Fusion Reactor Simulations
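To make the workload concrete, here is a toy 1-D slab version of Monte Carlo particle transport in plain Python. It is a sketch invented for illustration, not the optimized kernel from the case study: `transmit_fraction`, its cross-section parameters, and the coin-flip scattering model are all simplifications.

```python
import random

def transmit_fraction(sigma_t, sigma_a, thickness, n_particles=10000, seed=0):
    """Toy 1-D slab Monte Carlo transport (illustrative only).

    Particles enter a slab of total cross section sigma_t at x = 0.
    Each collision absorbs with probability sigma_a / sigma_t,
    otherwise the particle scatters into a random +/- direction.
    Returns the fraction of particles leaking out the far side.
    """
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_particles):
        x, direction = 0.0, 1.0
        while True:
            # Sample the free-flight distance from an exponential law
            x += direction * rng.expovariate(sigma_t)
            if x >= thickness:
                transmitted += 1
                break
            if x < 0.0:
                break  # leaked back out the near face
            # Collision: absorb or scatter
            if rng.random() < sigma_a / sigma_t:
                break  # absorbed
            direction = 1.0 if rng.random() < 0.5 else -1.0
    return transmitted / n_particles
```

A handy sanity check: in a pure absorber (`sigma_a == sigma_t`) the transmitted fraction converges to `exp(-sigma_t * thickness)`. The production kernels in the case study track continuous-energy cross sections and 3-D geometry, which is where the memory-bandwidth advantage matters.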

Scaling the "Memory Wall" for Multi-Dimensional Seismic Processing

Cerebras and KAUST achieve sustained memory-bandwidth performance for seismic processing that rivals the best-case performance of world-leading supercomputers

Real-Time Computational Physics with Wafer-Scale Processing

Cerebras and NETL achieve a two-orders-of-magnitude performance improvement for computational physics using a simple Python API.
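The NETL work applies wafer-scale processing to field-equation modeling. As a minimal stand-in for that class of workload (this is ordinary NumPy invented for illustration, not the Cerebras API), one explicit finite-difference step of the 2-D heat equation looks like:

```python
import numpy as np

def heat_step(u, alpha=0.1):
    """One explicit finite-difference step of the 2-D heat equation.

    u: 2-D array of field values; boundary values are held fixed.
    alpha: diffusion number (k * dt / dx**2); must stay below 0.25
    for the explicit scheme to be stable.
    """
    un = u.copy()
    # Five-point Laplacian stencil applied to the interior points
    un[1:-1, 1:-1] = u[1:-1, 1:-1] + alpha * (
        u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
        - 4.0 * u[1:-1, 1:-1]
    )
    return un
```

Stencil updates like this are bandwidth-bound on conventional hardware, since every grid point is read and written each step; that is what makes on-chip memory and a fast fabric decisive for this workload.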

Cerebras SDK

A general-purpose parallel-computing platform and API allowing software developers to write custom programs (kernels) for Cerebras systems.



“By using innovative new computer architectures, such as the Cerebras WSE, we were able to greatly accelerate speed to solution, while significantly reducing energy to solution on a key workload of field equation modeling.

This work combining the power of supercomputing and AI will deepen our understanding of scientific phenomena and greatly accelerate the potential of fast, real-time, or even faster-than-real-time simulation.”

Dr. Brian J. Anderson

Lab Director

National Energy Technology Laboratory

Ready to get started?

If you are curious about programming for wafer-scale or want to evaluate whether the CS-2 system would be a good fit for your organization, we encourage you to get in touch.

The following data points will help us answer your questions most effectively:

  1. What limits your performance today? Arithmetic? Memory latency or bandwidth? Communication latency or bandwidth? Something else?
  2. Can you give us specific algorithms or example code and data?
  3. How do you develop code today?
  4. In what languages?
  5. What libraries do you use?
  6. What floating point precision do you need? What integer word length?
  7. Are any open benchmarks of interest to you?
