AI insights, faster

Cerebras is a computer systems company dedicated to accelerating deep learning.

The Cerebras Wafer-Scale Engine (WSE) is the largest chip ever built. It is the heart of our deep learning system.

56x larger than any other chip, the WSE delivers more compute, more memory, and more communication bandwidth. This enables AI research at previously impossible speed and scale.

56x the size of the largest GPU

The Cerebras Wafer-Scale Engine measures 46,225 mm² and packs 1.2 trillion transistors and 400,000 AI-optimized cores.

By comparison, the largest graphics processing unit (GPU) is 815 mm² and has 21.1 billion transistors.
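As a rough sanity check on the 56x figure, the ratio follows directly from the numbers above (a minimal sketch; the die sizes and transistor counts are the ones quoted on this page):

```python
# Chip specs as quoted on this page.
WSE_AREA_MM2 = 46_225            # Cerebras Wafer-Scale Engine die area
GPU_AREA_MM2 = 815               # largest GPU die area
WSE_TRANSISTORS = 1.2e12         # 1.2 trillion
GPU_TRANSISTORS = 21.1e9         # 21.1 billion

area_ratio = WSE_AREA_MM2 / GPU_AREA_MM2              # ≈ 56.7
transistor_ratio = WSE_TRANSISTORS / GPU_TRANSISTORS  # ≈ 56.9

print(f"area: {area_ratio:.1f}x, transistors: {transistor_ratio:.1f}x")
```

Both ratios land just above 56, consistent with the 56x headline.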

Unlock unprecedented performance with familiar tools

The Cerebras software stack is designed to meet users where they are, integrating with open source ML frameworks like TensorFlow and PyTorch. Our software makes cluster-scale compute resources available to users with today's tools.
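To illustrate "meeting users where they are": model code written in a standard framework needs no special constructs. The sketch below is ordinary PyTorch (the model, sizes, and names are illustrative choices, not Cerebras APIs); the premise is that this kind of everyday framework code is what the Cerebras stack accepts.

```python
import torch
import torch.nn as nn

# An ordinary PyTorch model -- nothing hardware-specific here.
# Layer sizes are illustrative, not taken from any Cerebras example.
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

x = torch.randn(1, 784)   # a single sample, i.e. batch size 1
logits = model(x)
print(logits.shape)       # torch.Size([1, 10])
```

The same definition style carries over whether the target is a CPU, a GPU cluster, or wafer-scale hardware; the framework integration is what keeps the user-facing code unchanged.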

Accelerate your AI research

  • Train AI models in a fraction of the time, effortlessly

    Provides faster time to solution, with cluster-scale resources on a single chip and full utilization at any batch size, including batch size 1

  • Unlock new techniques and models

    Runs at full utilization with tensors of any shape: fat, square, or thin, dense or sparse, enabling researchers to explore novel network architectures and optimization techniques

  • Exploit model and data parallelism while staying on-chip

    Provides flexible parallel execution and supports model parallelism via layer pipelining out of the box

  • Design extraordinarily sparse networks

    Translates sparsity in models and data into performance via a vast array of programmable cores and a flexible interconnect
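To make the sparsity point concrete: common activation functions like ReLU zero out a large fraction of values, and hardware that skips those zeros does proportionally less work. A minimal NumPy sketch (illustrative only, not Cerebras code):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((256, 256))   # pre-activation values

relu = np.maximum(x, 0)               # a typical activation function
sparsity = np.mean(relu == 0)         # fraction of exact zeros

print(f"activation sparsity: {sparsity:.0%}")  # roughly 50% for this input
```

Roughly half the activations here are exact zeros; an architecture that skips them spends compute only on the values that carry signal, which is the kind of structure the claim above refers to.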

Explore more ideas in less time. Reduce the cost of curiosity.

Stay tuned for more updates on our line of products

Contact us