

Join Cerebras and Colovore for drinks and tacos and see the world’s most powerful accelerators in Silicon Valley’s most efficient datacenter!


Date: Monday, September 12, 2022
Time: 5:30 - 8:00 PM PT
Location: Colovore, 1101 Space Park Drive, Santa Clara, CA 95054


Speaker

Andy Hock

VP, Product Management


Cerebras Session Time: Sept 14, 2022 @ 12:10 PM PT

Cerebras Session Title: Massive Natural Language Model Processing

Cerebras Session Description: Cerebras Systems builds the fastest AI accelerators in the industry. In this talk we will review how the size and scope of massive natural language processing (NLP) models present fundamental challenges to legacy compute and to traditional cloud providers. We will explore the importance of guaranteed node-to-node latency in large clusters, why it can't be achieved in the cloud, and how its absence prevents linear, or even deterministic, scaling. We will examine the complexity of distributing NLP models over hundreds or thousands of GPUs, then show how quickly and easily a cluster of Cerebras CS-2s is set up and how linear scaling can be achieved over millions of compute cores with Cerebras technology. Finally, we will show how innovative customers are using clusters of Cerebras CS-2s to train large language models that address both basic and applied scientific challenges, including understanding the COVID-19 replication mechanism, epigenetic language modelling for drug discovery, and the development of clean energy. This enables researchers to test ideas that might otherwise languish for lack of resources and, ultimately, reduces the cost of curiosity.
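The latency argument above can be made concrete with a toy model. This is our own illustration, not Cerebras data or benchmarks: in a data-parallel cluster, the compute portion of each training step shrinks as accelerators are added, but every step still synchronizes on the slowest node-to-node exchange, so a latency floor caps the achievable speedup well below ideal.

```python
# Illustrative scaling model (hypothetical numbers, not measurements):
# per-step wall time for data-parallel training across n accelerators,
# assuming a fixed compute budget that divides evenly and a synchronization
# cost set by worst-case node-to-node latency plus jitter.

def step_time(n, compute=100.0, latency=0.5, jitter=0.0):
    """Per-step wall time: compute shrinks with n, but every step waits
    for the slowest link (latency + jitter), so speedup flattens."""
    return compute / n + latency + jitter

def speedup(n, **kw):
    """Speedup over a single accelerator under the same model."""
    return step_time(1, **kw) / step_time(n, **kw)

for n in (1, 8, 64, 512):
    # With any nonzero latency floor, speedup falls further below the
    # ideal n as the cluster grows; unpredictable jitter makes it worse.
    print(n, round(speedup(n), 1))
```

With the assumed numbers, 512 accelerators deliver well under half the ideal 512x speedup, which is the kind of gap deterministic, guaranteed-latency interconnects aim to close.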

Blog

Cerebras Sets Record for Largest AI Models Ever Trained on Single Device

Our customers can easily train and reconfigure GPT-3 and GPT-J language models with up to 20 billion parameters on a single CS-2 system

Read Blog

Cerebras Makes It Easy to Harness the Predictive Power of GPT-J

A look at why this open-source language model is so popular, how it works and how simple it is to train on a single Cerebras system.

Read Blog

Context is Everything: Why Maximum Sequence Length Matters

GPU-Impossible™ sequence lengths on Cerebras systems may enable breakthroughs in Natural Language Understanding, drug discovery and genomics.

Read Blog