NCSA deploys Cerebras CS-2 AI system in new HOLL-I supercomputer

Technology News
By Jean-Pierre Joosting

Cerebras Systems has announced that the National Center for Supercomputing Applications (NCSA) has deployed the Cerebras CS-2 system in their HOLL-I supercomputer for large scale AI.

“This system is unique in the AI computing space in that we will have multiple clusters at NCSA that address the various levels of AI and machine learning needs — Delta and HAL, our NVIDIA DGX, and now HOLL-I, consisting of the CS-2, as the crown jewel of our capabilities,” said Dr. Volodymyr Kindratenko, Director of the Center for Artificial Intelligence Innovation at NCSA. “Each system is at the correct scale for the various types of usage and all having access to our shared center-wide TAIGA filesystem eliminating delays and slowdowns caused by data migration as users move up the ladder of more intense machine learning computation.”

The Cerebras CS-2 is the world’s fastest AI system. It is powered by the largest processor ever built, the Cerebras Wafer-Scale Engine 2 (WSE-2), which delivers more AI-optimized compute cores, more fast memory, and more fabric bandwidth than any other deep learning processor in existence. Because the CS-2 is purpose-built for AI work, machine learning practitioners can write their models in the open-source TensorFlow or PyTorch frameworks and run them on the CS-2 without modification. With the CS-2 and the Cerebras Software Language (CSoft), practitioners can seamlessly scale up from small models like BERT to the largest models in existence, such as GPT-3.
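To illustrate what "without modification" means in practice, here is a minimal sketch of the kind of model a practitioner might write: a single transformer encoder block in plain, open-source PyTorch, with no accelerator-specific code. The model itself is standard PyTorch; the Cerebras-side compilation and execution through CSoft is not shown and is not part of this sketch.

```python
# A model written in ordinary PyTorch, the open-source framework the article
# names. Nothing here is tied to any particular accelerator; per the article,
# models like this run on the CS-2 without modification.
import torch
import torch.nn as nn


class TinyTransformerBlock(nn.Module):
    """One transformer encoder block, the building block of models like BERT and GPT."""

    def __init__(self, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.ff = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Self-attention with a residual connection, then a feed-forward
        # network with a second residual connection.
        attn_out, _ = self.attn(x, x, x)
        x = self.norm1(x + attn_out)
        return self.norm2(x + self.ff(x))


model = TinyTransformerBlock()
tokens = torch.randn(2, 16, 64)  # (batch, sequence length, embedding size)
out = model(tokens)
print(out.shape)  # torch.Size([2, 16, 64])
```

Scaling this sketch toward a BERT- or GPT-class model is a matter of stacking more such blocks and widening the dimensions; the framework code stays the same, which is the portability claim the article is making.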

“We founded Cerebras Systems with the audacious goal to forever change the AI compute landscape,” said Andrew Feldman, CEO and Co-Founder, Cerebras Systems. “Not only are we seeking to accelerate AI workloads by orders of magnitude over what is possible on legacy hardware, but we also want to put this extraordinary capability in the hands of academics and researchers.”

Large models have demonstrated state-of-the-art accuracy on many language processing and understanding tasks, but training them on GPUs is challenging and time-consuming. Training from scratch on new datasets often takes weeks and tens of megawatts of power on large clusters of legacy equipment. Moreover, as the size of the cluster grows, power, cost, and complexity grow exponentially. Programming clusters of graphics processing units requires rare skills, different machine learning frameworks, and specialized tools that demand weeks of engineering time for each iteration.

The CS-2 was built to directly address these challenges — setting up even the largest model takes only a few minutes, and the CS-2 is faster than clusters of hundreds of graphics processing units. With less time spent on setup, configuration, and training, the CS-2 enables users to explore more ideas in less time.

With customers in North America, Asia, Europe, and the Middle East, Cerebras is delivering industry-leading AI to a growing roster of customers in the enterprise, government, and high-performance computing segments, including GlaxoSmithKline, AstraZeneca, TotalEnergies, nference, Argonne National Laboratory, Lawrence Livermore National Laboratory, Pittsburgh Supercomputing Center, Edinburgh Parallel Computing Centre (EPCC), and Tokyo Electron.
