September 16, 2021
In Chip, Machine Learning, System, Cloud, Blog

Announcing Cerebras Cloud @ Cirrascale, Democratizing High-Performance AI Compute


Gil Haberman, Sr. Director of Product Marketing | September 16, 2021

Democratizing High-Performance AI Compute

Today, we are thrilled to announce the availability of Cerebras Cloud @ Cirrascale, delivering the world’s fastest AI accelerator as a cloud service! Nearly every day, we engage with machine learning (ML) scientists and engineers who are looking to push the frontiers of deep learning but find themselves constrained by the long training times of existing offerings. In contrast, our solution has been built from the ground up for AI. It delivers hundreds or thousands of times more performance than alternatives, enabling data scientists and ML practitioners to train and iterate on large, state-of-the-art models in minutes or hours rather than days or weeks.

Many of our early commercial and government customers chose to deploy Cerebras systems directly into their on-premises data centers to accelerate cutting-edge R&D in areas such as drug discovery and natural language processing (NLP). Our new Cerebras Cloud offering with Cirrascale dramatically expands our reach to more organizations, from innovative startups to the Fortune 500, bringing the unparalleled AI performance of the Cerebras CS-2 system to more users. This is an important step in truly democratizing high-performance AI compute!


Dream Big, with the Most Powerful AI at Your Fingertips

In building the Cerebras CS-2, every design choice has been made to accelerate deep learning, reducing training times and inference latencies by orders of magnitude. The CS-2 features 850,000 AI-optimized compute cores, 40 GB of on-chip SRAM, 20 PB/s of memory bandwidth, and 220 Pb/s of interconnect bandwidth, fed by 1.2 Tb/s of I/O across twelve 100 Gb Ethernet links.

Now, with Cerebras Cloud @ Cirrascale, this system is available right at your fingertips. Cerebras Cloud is offered in weekly or monthly flat-rate allotments, with discounts for longer-term, predictable usage as you grow. In our experience, as users observe the blazing-fast performance of the CS-2, ideas for new models and experiments emerge, such as training from scratch on domain-specific datasets, using more efficient sparse models, or experimenting with smaller batch sizes, resulting in better-performing models in production and an accelerated pace of innovation.

Integrate with Your Environment, Reduce Operational Burden

Getting started with Cerebras Cloud is easy. Our software platform integrates with popular machine learning frameworks like TensorFlow and PyTorch, so you can use familiar tools to get models running on the CS-2 right away. The Cerebras Graph Compiler automatically translates your neural network from its framework representation into a CS-2 executable, optimizing compute, memory, and communication to maximize utilization and performance.
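To make that concrete, here is a minimal sketch of the kind of framework-level model the compiler ingests: a plain PyTorch module and one ordinary training step. This is an illustrative assumption, not Cerebras code; the Cerebras-specific runner and session wiring (which comes from Cerebras tooling such as the Model Zoo) is deliberately omitted, and `TinyClassifier` is a hypothetical example model.

```python
import torch
import torch.nn as nn

# A standard PyTorch model definition. On Cerebras Cloud, this same
# framework representation is what the Cerebras Graph Compiler would
# lower to a CS-2 executable; the Cerebras-specific run/launch calls
# are omitted here (they come from Cerebras tooling, not shown in
# this post).
class TinyClassifier(nn.Module):
    def __init__(self, in_features: int = 128, num_classes: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, 64),
            nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# One ordinary training step on CPU, identical in form to what a user
# would hand off for CS-2 execution.
model = TinyClassifier()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 128)          # a batch of 32 synthetic examples
y = torch.randint(0, 10, (32,))   # integer class labels
logits = model(x)
loss = loss_fn(logits, y)
loss.backward()
optimizer.step()
```

The point of the sketch is that nothing in the model definition itself is CS-2-specific: the same `nn.Module` you debug locally is what the compiler consumes.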

This approach also aims to dramatically simplify daily operations. The CS-2 systems that power Cerebras Cloud deliver cluster-scale performance with the programming simplicity of a single node. Whether the model is large or small, our compiler optimizes execution to get the most out of the system. As a result, cluster orchestration, synchronization and model tuning are eliminated, letting you focus on innovation rather than cluster management overhead.

And for those of you with data stored in other cloud services, our friends at Cirrascale can easily integrate the Cerebras Cloud with your current cloud-based workflow to create a secure, multi-cloud solution. They will handle the setup and management, so you can focus on deep learning.

Want to learn more? Get started with Cerebras Cloud @ Cirrascale now!


Gil Haberman

Gil Haberman was a Product Marketing Director at Cerebras.

