Key Enabling Technologies

Wafer-Scale Integration

850,000 cores on a single wafer.
40 GB of fully distributed memory with single-cycle latency and 20 petabytes per second of aggregate bandwidth.

Nothing But Cache

Fully distributed memory gives each core massive memory bandwidth and single-cycle access. Ideal for applications that are traditionally memory-bound.
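
To make "memory-bound" concrete, a roofline-style estimate compares a kernel's arithmetic intensity (FLOPs per byte of memory traffic) with the machine's balance point (peak FLOP/s divided by memory bandwidth). The sketch below is a minimal illustration in Python; the 7-point-stencil counts and the 10 TFLOP/s and 200 GB/s machine figures are hypothetical numbers for a conventional DRAM-backed node, not Cerebras or vendor specifications.

```python
# Illustrative roofline-style estimate (hypothetical numbers, not vendor specs).
# A kernel is memory-bound when its arithmetic intensity (FLOPs per byte moved)
# falls below the machine balance (peak FLOP/s divided by memory bandwidth).

def arithmetic_intensity(flops: float, bytes_moved: float) -> float:
    """FLOPs performed per byte of memory traffic."""
    return flops / bytes_moved

def machine_balance(peak_flops: float, mem_bandwidth: float) -> float:
    """FLOPs the machine can do per byte it can fetch."""
    return peak_flops / mem_bandwidth

# Example: a 7-point stencil update touches ~8 double-precision values (~64 B)
# and performs ~8 FLOPs per grid point.
stencil_ai = arithmetic_intensity(flops=8, bytes_moved=8 * 8)      # ~0.125 FLOP/B

# Hypothetical conventional node: 10 TFLOP/s peak, 200 GB/s DRAM bandwidth.
balance = machine_balance(peak_flops=10e12, mem_bandwidth=200e9)   # 50 FLOP/B

print(f"stencil intensity: {stencil_ai:.3f} FLOP/B, machine balance: {balance:.0f} FLOP/B")
print("memory-bound" if stencil_ai < balance else "compute-bound")
```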

Gigantic On-Chip Fabric

A gigantic fabric connects 850,000 programmable compute elements with single-cycle latency, enabling best-in-class small-message passing.
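
As an illustration of the fine-grained, neighbor-to-neighbor communication such a fabric favors, the sketch below simulates a one-dimensional halo exchange followed by a 3-point stencil update in plain Python. The names and structure are illustrative assumptions only; this is not the Cerebras fabric interface or programming model.

```python
# Minimal sketch of nearest-neighbor (halo) exchange among a row of compute
# elements, simulated in plain Python. Purely illustrative: it models the
# communication pattern, not the Cerebras fabric or its programming model.

def halo_exchange(local_blocks):
    """Each element receives the edge value of its left and right neighbors."""
    padded = []
    for i, block in enumerate(local_blocks):
        left_ghost = local_blocks[i - 1][-1] if i > 0 else 0.0
        right_ghost = local_blocks[i + 1][0] if i < len(local_blocks) - 1 else 0.0
        padded.append([left_ghost] + block + [right_ghost])
    return padded

def stencil_step(padded_blocks):
    """3-point average using the freshly exchanged ghost values."""
    return [[(row[j - 1] + row[j] + row[j + 1]) / 3.0
             for j in range(1, len(row) - 1)]
            for row in padded_blocks]

blocks = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]   # three "cores", two values each
blocks = stencil_step(halo_exchange(blocks))
print(blocks)                                   # smoothed values per "core"
```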

Developer Community

If you’re pushing the boundaries of HPC, this is the place to exchange ideas with your peers and our engineers, and to unlock the power of Cerebras.

Software Development Kit

Allows researchers to extend the platform and develop custom kernels – empowering them to push the limits of AI and HPC innovation.

Featured Case Studies

Powering Extreme-Scale HPC with Cerebras Wafer-Scale Accelerators

In this paper, we explore HPC challenges today and how the Cerebras architecture can help to accelerate sparse tensor workloads.

Real-Time Computational Physics with Wafer-Scale Processing

Cerebras and NETL achieve performance improvements of two orders of magnitude for computational physics using a simple Python API.

TotalEnergies and Cerebras Create Massively Scalable Stencil Algorithm

TotalEnergies used Cerebras to turn a seismic kernel benchmark, long accepted to be memory-bound, into a compute-bound workload.
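
The kernel class behind this result is an explicit finite-difference stencil. As a minimal sketch of the access pattern, assuming nothing about TotalEnergies' actual benchmark or the Cerebras implementation, the NumPy snippet below advances a 2-D acoustic wave equation by one explicit time step: each grid point is updated from a handful of neighbors, so only a few FLOPs are performed per value loaded. Streamed from DRAM, that ratio makes the kernel memory-bound; with the working set held in memory adjacent to each core, the arithmetic becomes the limit instead.

```python
# Minimal 2-D acoustic wave-equation update (second order in time and space),
# written with NumPy. Illustrative only: this shows the generic stencil shape,
# not TotalEnergies' benchmark nor a Cerebras kernel.
import numpy as np

def wave_step(u_prev, u_curr, courant_sq):
    """One explicit step of u_tt = c^2 * laplacian(u); courant_sq = c^2*dt^2/h^2."""
    lap = (-4.0 * u_curr[1:-1, 1:-1]
           + u_curr[:-2, 1:-1] + u_curr[2:, 1:-1]
           + u_curr[1:-1, :-2] + u_curr[1:-1, 2:])
    u_next = u_curr.copy()
    u_next[1:-1, 1:-1] = (2.0 * u_curr[1:-1, 1:-1] - u_prev[1:-1, 1:-1]
                          + courant_sq * lap)
    return u_next

n = 64
u_prev = np.zeros((n, n))
u_curr = np.zeros((n, n))
u_curr[n // 2, n // 2] = 1.0          # point source at the center
for _ in range(10):                   # a few explicit time steps
    u_prev, u_curr = u_curr, wave_step(u_prev, u_curr, courant_sq=0.1)
print(u_curr[n // 2 - 2:n // 2 + 3, n // 2 - 2:n // 2 + 3])
```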

The Cerebras Software Development Kit: A Technical Overview

Enables developers to harness the power of wafer-scale computing with the tools and software used by the Cerebras development team.

Cerebras SDK

A general-purpose parallel-computing platform and API allowing software developers to write custom programs (kernels) for Cerebras systems.
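
At a high level, code for such a platform typically follows a host/device pattern: compile a kernel, move input data onto the device, launch the kernel across the fabric, and copy results back. The sketch below mimics that flow in plain Python so it can run anywhere; every name in it (FakeDevice, copy_to_device, launch, and so on) is invented for illustration and is not the Cerebras SDK API.

```python
# Hypothetical host-side workflow for a custom kernel. All names here are
# invented for illustration; they are NOT the Cerebras SDK API.
from dataclasses import dataclass, field

@dataclass
class DeviceBuffer:
    """Stand-in for data resident on the accelerator."""
    data: list = field(default_factory=list)

class FakeDevice:
    """Stand-in device so the sketch runs on any machine."""
    def copy_to_device(self, host_data):
        return DeviceBuffer(list(host_data))

    def launch(self, kernel, buf):
        # A real accelerator would apply the kernel across many compute
        # elements in parallel; here we simply map it over the buffer.
        buf.data = [kernel(x) for x in buf.data]

    def copy_to_host(self, buf):
        return list(buf.data)

device = FakeDevice()
buf = device.copy_to_device([1.0, 2.0, 3.0])   # stage inputs on the "device"
device.launch(lambda x: 2.0 * x, buf)          # run the "custom kernel"
print(device.copy_to_host(buf))                # [2.0, 4.0, 6.0]
```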

Unprecedented Speed

Testimonial

“By using innovative new computer architectures, such as the Cerebras WSE, we were able to greatly accelerate speed to solution, while significantly reducing energy to solution on a key workload of field equation modeling.

“This work combining the power of supercomputing and AI will deepen our understanding of scientific phenomena and greatly accelerate the potential of fast, real-time, or even faster-than-real-time simulation.”

Dr. Brian J. Anderson

Lab Director

National Energy Technology Laboratory

Ready to get started?

If you are curious about programming for wafer-scale or want to evaluate whether the CS-2 system would be a good fit for your organization, we encourage you to get in touch.

The following data points will help us answer your questions most effectively:

  1. What limits your performance today? Arithmetic? Memory latency or bandwidth? Communication latency or bandwidth? Something else?
  2. Can you give us specific algorithms or example code and data?
  3. How do you develop code today?
  4. In what languages?
  5. What libraries do you use?
  6. What floating point precision do you need? What integer word length?
  7. Are any open benchmarks of interest to you?
