Cerebras deploys the CS-1, the Industry’s Fastest AI Computer, at Argonne National Lab

Blog

Andrew Feldman | November 19, 2019

Greetings! Today I am proud and honored to announce that Argonne National Laboratory (ANL), one of the nation’s premier research centers, is the first customer to deploy the Cerebras CS-1 system. This is the result of nearly two years of deep collaboration, and it is deeply fulfilling to our mission as a company that the CS-1 is being used for purposes as diverse as understanding cancer drug response rates, traumatic brain injury, gravitational wave detection and parameter estimation, materials science, and physics – just to name a few.

The CS-1 deployed at Argonne National Lab

We are fortunate to have great partners at ANL: Rick Stevens, the Associate Laboratory Director for Computing, Environment and Life Sciences; Tom Brettin, the Strategic Program Manager focusing on projects at the intersection of genomics, artificial intelligence, and leadership-scale computing; and Hyunseung (Harry) Yoo, the lead researcher on the very first customer model to run on the CS-1, which investigates tumor cells’ response to different drug treatments. The ultimate goal of this project is to develop personalized treatment plans for cancer patients based on their genomic make-up, thus improving survival rates. Check out the video below to see what we’ve been up to together.

Cerebras’s collaboration with Argonne National Lab

Cerebras is building one of the fastest AI computers in the world. It’s very novel, it uses very large scale integration and you can take models and accelerate them by orders of magnitude. And the faster we can train models, the faster we can search through different model configurations, the more rapid our progress will be in scientific problems, whether it’s in cancer, whether it’s in drug design or material science – designing new batteries or photovoltaic materials. All of that is limited by how fast we can do machine learning, and the Cerebras system is at the bleeding edge of accelerators for deep learning.

Rick Stevens, Associate Laboratory Director for Computing, Environment and Life Sciences, Argonne National Lab

The CS-1 system, details of which we also revealed today, is the fastest AI computer on the planet. The CS-1 is precision engineering at its best – it hosts the Wafer-Scale Engine (WSE), the world’s largest chip and the only wafer-scale integrated processor. We designed the WSE from the ground up for artificial intelligence work, so it eschews primitives needed only for non-AI work such as dense graphics processing. The 400,000 cores on the WSE are optimized for the sparse linear algebra that underpins all neural network computation. We also invented sparsity harvesting technology to accelerate performance on sparse workloads like deep learning. And we coupled these cores with 18 GB of super-fast on-chip SRAM in a single level of memory hierarchy, and a blazing-fast communication fabric called Swarm, which provides 100 Pb/s of bandwidth and is fully configurable to support the precise communication required for training a user-specified model.
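The intuition behind sparsity harvesting can be sketched in a few lines of NumPy. This is an illustrative model of the idea only, not Cerebras’s implementation: when many activations are exactly zero (as after a ReLU), the multiply-accumulates for those entries contribute nothing and can be skipped entirely.

```python
import numpy as np

def sparse_matvec(weights, activations):
    """Multiply only where activations are nonzero, skipping the wasted
    multiply-accumulates -- a toy model of sparsity harvesting."""
    nz = np.nonzero(activations)[0]          # indices of nonzero activations
    return weights[:, nz] @ activations[nz]  # compute only the useful terms

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))
x = rng.standard_normal(8)
x[x < 0] = 0.0  # ReLU-style sparsity: many activations become exactly zero

dense = W @ x                 # full multiply, including zero terms
sparse = sparse_matvec(W, x)  # zero terms harvested away
print(np.allclose(dense, sparse))  # True: same answer, fewer multiplies
```

The result is identical because the skipped terms are exact zeros; the savings grow with the fraction of zero activations in the workload.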

The CS-1 is fast, so I get the answer fast, and so I can do more experiments. And also it has a lot of memory, so I can extend the model easily, and expand our research. For big models, I have to wait days using the supercomputers. With CS-1 I can get the answer in hours. It’s a big difference – I can engage more with the research. I am always in the context of research. I can keep moving on with that research. That is a huge benefit – continuation without switching context.  

Hyunseung (Harry) Yoo, Software Engineer, Argonne National Lab 

Throughout the design of the Cerebras AI solution, ease of deployment and use was our true north. Deploying the solution requires no changes to existing workflows or to datacenter operations. The CS-1 system provides power and cooling to the WSE using standard datacenter infrastructure, and Terabit-scale I/O with 100 Gigabit Ethernet links to any standard switch. In addition, the Cerebras software platform allows machine learning researchers to leverage CS-1 performance without changing their existing workflows. Users can define their models using industry-standard ML frameworks such as TensorFlow and PyTorch. A powerful graph compiler automatically converts these models into optimized executables for the CS-1, and a rich set of tools enables intuitive model debugging and profiling.
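To illustrate what that workflow looks like from the researcher’s side, here is an ordinary TensorFlow/Keras model definition with nothing Cerebras-specific in it; the layer sizes are arbitrary choices for the example, and the Cerebras graph compiler step itself is not shown. The point is that the input to the compiler is a standard framework-level model.

```python
import tensorflow as tf

# A plain TensorFlow/Keras model definition -- exactly what a researcher
# would write for any other accelerator. The Cerebras graph compiler
# (not shown here) consumes a standard graph like this one and emits an
# optimized CS-1 executable.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),                       # e.g. flattened 28x28 images
    tf.keras.layers.Dense(128, activation="relu"),      # hidden layer
    tf.keras.layers.Dense(10, activation="softmax"),    # 10-class output
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```

Because the model is expressed in the standard framework API, the same source runs unmodified on other backends; retargeting it to the CS-1 is the compiler’s job, not the user’s.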

We can program the CS-1 with common programming languages like Python, and standard Deep Learning frameworks like TensorFlow. The models that we are developing using community-based tools run seamlessly on the CS-1.  

Tom Brettin, Strategic Program Manager, Argonne National Lab

With this breakthrough in performance, the Cerebras CS-1 eliminates the primary impediment to the advancement of artificial intelligence, reducing the time it takes to train models from months to minutes and from weeks to seconds, allowing researchers to be vastly more productive. In so doing, the CS-1 reduces the cost of curiosity, accelerating the arrival of the new ideas and techniques that will usher in tomorrow’s AI.

If you look back over the last 30 years or so, there’s a handful of inflection points in technology where somebody has a new idea, there’s a new product, and that product sets the standard for how the future evolves, and I think the CS-1 is in that category.

Rick Stevens, Associate Laboratory Director for Computing, Environment and Life Sciences, Argonne National Lab

We look forward to collaborating with the rest of the world on solving their hardest deep learning problems. If you’re at Supercomputing this week, stop by the Cerebras booth (#689) and check out our demo. On Tuesday, November 19, hear my co-founder and our Chief Systems Architect, Jean-Philippe Fricker, give a talk on the lessons we learned along the way. We’ll also be participating in several Birds of a Feather sessions, and many members of our team will be at the conference. If not, you can always reach out to us via our website.