March 14, 2022
In Press Release

nference Accelerates Self-Supervised Language Model Training with Cerebras CS-2 System

The ability to harness vast amounts of health data using advanced AI technology will lead to new discoveries and insights needed to improve patient care


SUNNYVALE, Calif. – Cerebras Systems, the pioneer in high-performance artificial intelligence (AI) compute, and nference, an AI-driven health technology company, today announced a collaboration to accelerate natural language processing (NLP) for biomedical research and development by orders of magnitude with a Cerebras CS-2 system installed at the nference headquarters in Cambridge, Mass.

The vast amounts of health data that lie within patient records, scientific papers, medical imagery, and genomic databases could be critical to advancing health outcomes. Unfortunately, this information is nearly impossible for data scientists and machine learning (ML) researchers to access, as it exists in unstructured, siloed, and incompatible forms, forcing researchers to sift through it manually. While data accessibility is a fundamental challenge in healthcare today, newer AI architectures such as transformer models can assist by processing data from various sources, de-identifying it, and converting it into structured, usable intelligence.
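
To make the de-identification step concrete, here is a minimal, hypothetical sketch using an off-the-shelf Hugging Face token-classification model to flag and mask person names in a clinical note. The model name ("dslim/bert-base-NER"), its "PER" entity label, and the masking rule are illustrative assumptions, not nference's actual pipeline.

```python
# Illustrative de-identification sketch (not nference's pipeline).
# Assumes the open-source model "dslim/bert-base-NER" and its "PER" label.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="dslim/bert-base-NER",
    aggregation_strategy="simple",
)

note = "Patient John Smith was seen on 2022-03-14 for a follow-up visit."
entities = ner(note)

# Mask person names, working right to left so character offsets stay valid.
redacted = note
for ent in sorted(entities, key=lambda e: e["start"], reverse=True):
    if ent["entity_group"] == "PER":
        redacted = redacted[:ent["start"]] + "[NAME]" + redacted[ent["end"]:]

print(redacted)
```

A production system would also need to handle dates, identifiers, locations, and other protected health information, typically with models trained specifically on clinical text.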

nference uses transformer AI models to perform self-supervised learning on large volumes of unstructured, unlabeled data, translating vast amounts of health data into information that can be used to discover insights and drive research. However, training large models is complex, computationally intensive, and time-consuming, often requiring large clusters of conventional processors. A key feature of the Cerebras CS-2 architecture is that it lets data scientists and machine learning researchers train on longer sequence lengths than is practical on smaller, conventional processors, which is particularly relevant to the research being conducted at nference.
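
The sequence-length point can be illustrated with a minimal self-supervised (masked language model) training sketch in plain PyTorch and Hugging Face Transformers. This is a generic illustration under assumed values (the 2,048-token limit, the BERT-style architecture, the placeholder corpus), not the Cerebras software stack or nference's training code.

```python
# Generic masked-language-model pretraining sketch (illustrative only).
# Assumed: max_seq_length=2048, a BERT-style model, a placeholder corpus.
import torch
from transformers import (
    AutoTokenizer,
    BertConfig,
    BertForMaskedLM,
    DataCollatorForLanguageModeling,
)

max_seq_length = 2048  # longer context per example is the point at issue

tokenizer = AutoTokenizer.from_pretrained(
    "bert-base-uncased", model_max_length=max_seq_length
)
config = BertConfig(max_position_embeddings=max_seq_length)
model = BertForMaskedLM(config)

# Self-supervision: the collator masks 15% of tokens and the model learns
# to reconstruct them, so no human-provided labels are required.
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)

texts = ["unlabeled biomedical text goes here ..."]  # placeholder corpus
examples = [
    tokenizer(t, truncation=True, max_length=max_seq_length)["input_ids"]
    for t in texts
]
batch = collator(examples)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
outputs = model(input_ids=batch["input_ids"], labels=batch["labels"])
outputs.loss.backward()
optimizer.step()
```

On conventional accelerators, memory pressure from attention and activations typically caps how large max_seq_length can be in practice; the press release's claim is that the CS-2 makes such long-sequence training practical.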

“nference was founded to help solve complex medical problems and improve health outcomes by unlocking insights contained within biomedical data while protecting individual patient privacy,” said Ajit Rajasekharan, Chief Technology Officer, nference. “Our solution uses transformer models to help researchers and clinicians make sense of siloed and inaccessible health data, leading to new discoveries and findings that can impact patient outcomes. With Cerebras’s powerful CS-2 system, we can train transformer models with much longer sequence lengths than we could before, enabling us to iterate more rapidly and build better, more insightful models.”

“AI is driving an exponential increase in demand for compute,” said Andy Hock, Vice President of Product, Cerebras Systems. “As we have recently demonstrated across multiple customers and published work, the Cerebras CS-2 is orders of magnitude faster than legacy alternatives. This orders-of-magnitude performance advantage comes from the Cerebras Wafer-Scale Engine (WSE-2), the world’s largest and most powerful AI processor. The WSE-2 is purpose-built with 850,000 AI-optimized cores to accelerate the models of today and unlock future models not practical or possible on legacy infrastructure. The partnership and our work with nference are a great example of this, where their team – equipped with a CS-2 – is pushing the boundaries of AI to accelerate biomedical research and discovery to improve health outcomes.”

The Cerebras CS-2 system delivers the deep learning compute performance of hundreds of graphics processing units in a cluster, with the programming ease and efficiency of a single system. Powered by the largest and fastest processor ever built, the 2.6-trillion-transistor, second-generation Cerebras Wafer-Scale Engine (WSE-2), the CS-2 delivers more AI-optimized compute cores, fast memory, and fabric bandwidth than any other deep learning processor in existence.

With customers and partners in North America, Asia, Europe and the Middle East, Cerebras is delivering industry-leading AI solutions to a growing roster of customers including GlaxoSmithKline, AstraZeneca, Tokyo Electron Devices, Argonne National Laboratory, Lawrence Livermore National Laboratory, Pittsburgh Supercomputing Center, and the Edinburgh Parallel Computing Centre (EPCC).

For more information about the Cerebras CS-2 system and its applications in health and pharma, please visit https://cerebrasstage.wpengine.com/industries/health-and-pharma/.


Rebecca Lewington

Technology Evangelist, Cerebras Systems
