March 14, 2022 | Press Release

nference Accelerates Self-Supervised Language Model Training with Cerebras CS-2 System

The ability to harness vast amounts of health data using advanced AI technology will lead to new discoveries and insights needed to improve patient care

SUNNYVALE, Calif. – Cerebras Systems, the pioneer in high performance artificial intelligence (AI) compute, and nference, an AI-driven health technology company, today announced a collaboration to accelerate natural language processing (NLP) for biomedical research and development by orders of magnitude with a Cerebras CS-2 system installed at the nference headquarters in Cambridge, Mass.

The vast amounts of health data that lie within patient records, scientific papers, medical imagery, and genomic databases could be critical to advancing health outcomes. Unfortunately, this information is nearly impossible for data scientists and machine learning (ML) researchers to access, as it exists in unstructured, siloed, and incompatible forms, forcing researchers to sift through it manually. While data accessibility is a fundamental challenge in healthcare today, newer AI architectures such as transformer models can assist by processing data from various sources, de-identifying it, and converting it into structured, usable intelligence.

nference uses transformer AI models to perform self-supervised learning on large volumes of unstructured, unlabeled data, translating vast amounts of health data into information that can be used to discover insights and drive research. However, training large models is complex, computationally intensive, and time-consuming, often requiring large clusters of conventional processors. A key feature of the Cerebras CS-2 architecture is that it lets data scientists and machine learning researchers train on longer sequence lengths than is practical on smaller, conventional processors, a capability particularly relevant to the research being conducted at nference.
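The self-supervised objective described above can be illustrated with a minimal masked-language-modeling sketch: a fraction of tokens is hidden from the input, and the hidden originals become the training labels, so no human annotation is needed. This is an illustrative sketch only; the function names, masking rate, and the 2,048-token sequence length are assumptions for the example, not nference's actual pipeline.

```python
import random

def mask_tokens(tokens, mask_id, mask_prob=0.15, seed=0):
    """BERT-style masking: hide ~15% of tokens behind a [MASK] id.
    The hidden originals become the labels (self-supervision);
    -100 marks positions excluded from the loss."""
    rng = random.Random(seed)
    inputs, labels = list(tokens), [-100] * len(tokens)
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            labels[i] = tok       # model must reconstruct the original token
            inputs[i] = mask_id   # hide it from the input
    return inputs, labels

# One long sequence (e.g. a full clinical note) rather than a short window:
seq = list(range(2048))           # placeholder token ids, 2,048 tokens long
inputs, labels = mask_tokens(seq, mask_id=0)
masked = sum(1 for lab in labels if lab != -100)
```

Longer sequences let the model attend across an entire document in one pass, which is where the CS-2's memory and bandwidth advantages cited in this release come into play; attention cost grows quadratically with sequence length, so long contexts are what strain conventional processors.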

“nference was founded to help solve complex medical problems and improve health outcomes by unlocking insights contained within biomedical data while protecting individual patient privacy,” said Ajit Rajasekharan, Chief Technology Officer, nference. “Our solution uses transformer models to help researchers and clinicians make sense of siloed and inaccessible health data, leading to new discoveries and findings that can impact patient outcomes. With Cerebras’s powerful CS-2 system, we can train transformer models with much longer sequence lengths than we could before, enabling us to iterate more rapidly and build better, more insightful models.”

“AI is driving an exponential increase in demand for compute,” said Andy Hock, Vice President of Product, Cerebras Systems. “As we have recently demonstrated across multiple customers and in published work, the Cerebras CS-2 is orders of magnitude faster than legacy alternatives. This performance advantage comes from the Cerebras Wafer-Scale Engine (WSE-2), the world’s largest and most powerful AI processor. The WSE-2 is purpose-built with 850,000 AI-optimized cores to accelerate the models of today and unlock future models not practical or possible on legacy infrastructure. Our partnership with nference is a great example: their team, equipped with a CS-2, is pushing the boundaries of AI to accelerate biomedical research and discovery to improve health outcomes.”

The Cerebras CS-2 system delivers the deep learning compute performance of many tens to hundreds of graphics processing units (GPUs) in a cluster, with the programming ease and efficiency of a single device. It is powered by the largest and fastest processor ever built, the 2.6-trillion-transistor second-generation Cerebras Wafer-Scale Engine (WSE-2), which provides more AI-optimized compute cores, more fast on-chip memory, and more fabric bandwidth than any other deep learning processor in existence.

With customers and partners in North America, Asia, Europe, and the Middle East, Cerebras delivers industry-leading AI solutions to a growing roster of customers including GlaxoSmithKline, AstraZeneca, Tokyo Electron Devices, Argonne National Laboratory, Lawrence Livermore National Laboratory, Pittsburgh Supercomputing Center, and the Edinburgh Parallel Computing Centre (EPCC).

For more information about the Cerebras CS-2 system and its applications in health and pharma, please visit https://cerebrasstage.wpengine.com/industries/health-and-pharma/.


Rebecca Lewington

Technology Evangelist, Cerebras Systems

