LOS ALTOS, Calif.--(BUSINESS WIRE)--Cerebras Systems, the pioneer in high performance artificial intelligence (AI) compute, today announced record-breaking performance on a scientific compute workload. In collaboration with the Department of Energy’s National Energy Technology Laboratory (NETL), Cerebras demonstrated its CS-1 delivering speeds beyond what either CPUs or GPUs are currently able to achieve. Specifically, the CS-1 was 200 times faster than the Joule supercomputer on the key workload of computational fluid dynamics (CFD).

“NETL researchers and the Cerebras team collaborated to extend the frontier of what is possible,” said Dr. Brian J. Anderson, Lab Director at NETL. “This work at the intersection of supercomputing and AI will extend our understanding of fundamental scientific phenomena, including combustion and how to make combustion more efficient. Together, NETL and Cerebras are using innovative new computer architectures to further the NETL Mission and benefit the United States.”

The workload under test was to solve a large, sparse, structured system of linear equations. These calculations underpin the modeling of physical phenomena: they are the basis for simulating the movement of fluids and are fundamental to problems as diverse as designing airplane wings, burning coal more cleanly in a power plant, and extracting oil and gas more efficiently from a field. Researchers from NETL and Cerebras will present their findings at the SC20 Conference, and a paper based on the work can be found on arXiv.
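To make this class of workload concrete, the sketch below assembles a small sparse, structured system from a finite-difference stencil and solves it with an iterative Krylov method. It is a minimal Python/SciPy illustration of the problem type described above; the 2D grid, 5-point stencil, and conjugate gradient solver are illustrative assumptions, not NETL’s actual code.

```python
# Illustrative sketch only -- not NETL's code. A sparse, structured linear
# system A x = b from a 5-point finite-difference stencil on an n x n grid,
# solved with an iterative Krylov method (conjugate gradient).
import numpy as np
from scipy.sparse import diags, identity, kron
from scipy.sparse.linalg import cg

n = 128  # grid points per side, kept small here; production CFD grids are far larger

# 1D second-difference operator, then the 2D Laplacian via Kronecker products.
T = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = kron(identity(n), T) + kron(T, identity(n))  # n^2 unknowns, ~5 nonzeros per row

b = np.ones(n * n)  # placeholder right-hand side (e.g., a uniform source term)
x, info = cg(A, b)  # iterate until SciPy's default tolerance is met
print("converged" if info == 0 else f"stopped with info={info}")
```

Systems of this shape, scaled up to millions of unknowns and iterated at every time step of a simulation, are what make memory bandwidth and interprocessor communication the dominant costs.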

The Joule supercomputer is the 82nd fastest supercomputer in the world as ranked by TOP500.org and the 24th fastest in the United States. Costing tens of millions of dollars to build, it contains 84,000 CPU cores spread over dozens of racks and consumes 450 kW of power. A single Cerebras CS-1 is 26 inches tall, fits in one-third of a rack and is powered by the industry’s only wafer-scale processing engine, the Cerebras Wafer Scale Engine (WSE). By bringing together exceptional memory performance with massive bandwidth, low latency interprocessor communication and an architecture optimized for high bandwidth computing, the CS-1 was able to outperform the Joule supercomputer.

The CS-1 proved 200 times faster than the largest cluster of CPUs that Joule could allocate to a problem of this size: 16,384 CPU cores. Because of the problem’s complexity and its poor scaling, adding further CPU cores could not reduce the time to solution. In other words, no number of CPUs working together could solve this problem in the time achieved by a single CS-1. The same holds true for graphics processing units (GPUs): the CS-1 performed these calculations 10,000 times faster than a GPU. GPUs fare no better than CPUs at strong scaling, where the size of the problem is fixed; no matter how many CPUs or GPUs are thrown at a fixed-size problem, there is a maximum attainable performance.
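The strong-scaling ceiling described above can be illustrated with a toy cost model: with the problem size fixed, per-core compute time shrinks as cores are added, but communication and synchronization overhead grows, so speedup saturates and then reverses. The constants below are assumptions chosen purely for illustration, not measured figures from Joule or the CS-1.

```python
# Toy strong-scaling model with assumed constants -- illustration only.
# Fixed total work is split across p cores; overhead grows with core count.
T_COMPUTE = 1.0   # total serial compute time (arbitrary units)
C_COMM = 1e-4     # assumed communication-overhead coefficient

def wall_time(p: int) -> float:
    """Wall-clock time on p cores under this simple model."""
    return T_COMPUTE / p + C_COMM * p ** 0.5

for p in (1, 64, 1024, 16384, 262144):
    print(f"{p:>7} cores -> speedup {wall_time(1) / wall_time(p):7.1f}x")
```

Under this model, speedup peaks at a few hundred cores and then declines as overhead dominates; that saturation, whatever the exact constants, is the failure of strong scaling that no quantity of discrete processors can escape.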

“Cerebras is proud to extend its work with NETL and produce extraordinary results on one of the foundational workloads in scientific compute,” said Andrew Feldman, co-founder and CEO, Cerebras Systems. “This work opens the door to major breakthroughs in scientific computing performance, as the CS-1, with its wafer-scale engine, overcomes traditional barriers to high achieved performance, enabling real-time and other use cases precluded by the failure of strong scaling. Because of the radical memory and communication acceleration that wafer scale integration creates, we have been able to go far beyond what is possible in a discrete, single chip processor, be it a CPU or a GPU.”

The research was led by Dr. Dirk Van Essendelft, Machine Learning and Data Science Engineer at NETL, and Michael James, Cerebras co-founder and Chief Architect of Advanced Technologies. The results came after months of work and continue the close collaboration between scientists at the Department of Energy’s NETL and Cerebras Systems. In September 2019, the Department of Energy announced its partnership with Cerebras, including deployments with Argonne National Laboratory and Lawrence Livermore National Laboratory.

The Cerebras CS-1 was announced in November 2019 to wide acclaim. It has won numerous awards, including Fast Company’s Best World Changing Ideas, IEEE Spectrum’s Emerging Technology Awards, the Forbes AI 50 2020, HPCwire’s Readers’ and Editors’ Choice Awards and the CB Insights AI 100 2020. The CS-1 is built around the world’s largest processor, the WSE, which is 56 times larger, has 54 times more cores, 450 times more on-chip memory, 5,788 times more memory bandwidth and 20,833 times more fabric bandwidth than the leading GPU competitor. Depending on workload, from AI to HPC, the CS-1 delivers hundreds or thousands of times more compute than legacy alternatives, and it does so at a fraction of the power draw and space.

For more information on the Cerebras and NETL workload, please visit the Cerebras blog.

About Cerebras Systems

Cerebras Systems is a team of pioneering computer architects, computer scientists, deep learning researchers, and engineers of all types. We have come together to build a new class of computer to accelerate artificial intelligence work by three orders of magnitude beyond the current state of the art. The Cerebras CS-1 is the fastest AI computer in existence. It contains a collection of industry firsts, including the Cerebras Wafer Scale Engine (WSE). The WSE is the largest chip ever built. It contains 1.2 trillion transistors, covers more than 46,225 square millimeters of silicon and contains 400,000 AI-optimized compute cores. The largest graphics processor on the market has 54 billion transistors, covers 826 square millimeters and has only 6,912 cores. In artificial intelligence work, large chips process information more quickly, producing answers in less time. As a result, neural networks that in the past took months to train can now train in minutes on the Cerebras WSE.

Contacts

Kim Ziesemer
Email: pr@zmcommunications.com