October 18, 2021

Scaling and Operationalizing AI in Government


Gil Haberman, Senior Director of Product Marketing


Today at AI World Government 2021, Andy Hock, our VP of Product, shared our vision of how Cerebras Systems can empower government organizations to harness the power of AI and develop unprecedented capabilities in defense, energy, climate, and health services.

While AI is in its infancy, many government organizations are already adopting sophisticated models to improve critical capabilities. For example, AI is used in language, signal, and image processing for science and security; AI-guided drug development; records analytics; and improved weather models.

However, public datasets are massive, and groundbreaking applications require large models that take weeks or months to train. As models rapidly grow larger, this challenge significantly inhibits research progress and drives costly delays in new capabilities; the datasets are available, but deep insights and key decisions are delayed or lost.

The reason is primarily that AI models are very compute intensive. Existing general-purpose processors were not built for this work at large scale, so existing solutions are workable but far from optimal. For large models, clusters of processors are stitched together in an attempt to speed up performance.

Yet this approach doesn’t scale: for instance, looking at a prominent GPU MLPerf 1.0 result, we see that increasing the device count to 512 delivers only a 68x speed-up. As more devices are used, the inefficiency grows, including massive waste in energy. In an attempt to make the most of these clusters, researchers are forced to continually fine-tune their models, making the solution increasingly costly to program and operate. As a result, we need more compute per device, and a way to reduce the reliance on scale-out data-parallel architectures.

Source: Leading GPU MLPerf 1.0 results
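The scaling inefficiency quoted above can be made concrete with a quick calculation of parallel efficiency, the fraction of ideal linear scaling actually achieved. This is a minimal sketch; the function name is illustrative, with the 512-device and 68x figures taken from the MLPerf 1.0 result discussed above.

```python
# Parallel efficiency: speed-up achieved divided by ideal linear speed-up.
# A value of 1.0 means every added device contributes fully; lower values
# mean an increasing share of the cluster's compute is wasted.

def parallel_efficiency(devices: int, speedup: float) -> float:
    """Fraction of ideal linear scaling actually achieved."""
    return speedup / devices

# 512 devices delivering a 68x speed-up over a single device:
eff = parallel_efficiency(512, 68.0)
print(f"{eff:.1%}")  # 13.3% -- far from the ideal of 100%
```

In other words, at 512 devices roughly seven out of every eight GPUs' worth of compute is lost to scaling overhead, which is the waste the paragraph above describes.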

We took on this challenge at Cerebras and delivered the largest chip ever built, the backbone of the fastest AI Accelerator in the world. Our latest CS-2 system packs the compute power of 850,000 AI-optimized cores on a single chip, along with orders of magnitude more memory and interconnect bandwidth than traditional systems. These purpose-built performance advantages allow the CS-2 to accelerate deep learning models far beyond what GPUs and CPUs are capable of.

A single CS-2 provides the wall-clock compute performance of an entire cluster of GPUs, made up of dozens to hundreds of individual processors, at a fraction of the space and power. For government organizations, this means faster insights at lower cost. For the ML researcher, this translates to achieving cluster-scale performance with the programming ease of a single device.

For example, we have been working with the U.S. Department of Energy’s Argonne National Laboratory (ANL) on accelerating drug and cancer research. As Rick Stevens, ANL’s Chief Scientist, put it: “We are doing in a few months what would normally take a drug development process years to do. Computing power was almost equivalent to that of a cluster of computers with up to 300 GPU chips.” The massive acceleration of the Cerebras system empowers researchers to innovate much faster and drives breakthroughs that would otherwise not be discovered.

Do you think we can help your organization achieve similar outcomes? Want to learn more about our approach? Click here to connect with our team!


Gil Haberman

Gil Haberman was a Product Marketing Director at Cerebras.

