
Industry’s largest AI processor has 2.6 trillion transistors
Custom-built for AI work, the 7-nm-based Wafer Scale Engine 2 (WSE-2) is a single chip featuring 2.6 trillion transistors and 850,000 AI-optimized cores. The WSE-2, says the company, has 2.55 trillion more transistors than the largest graphics processing unit (GPU), 123x more cores, and 1,000x more high-performance on-chip memory than GPU competitors.
The WSE-2 will power the company’s CS-2 AI computer, billed as the industry’s fastest AI computer and designed and optimized for 7 nm and beyond. The CS-2 more than doubles the performance of the company’s first-generation CS-1.
Manufactured by TSMC on its 7-nm node, the WSE-2 more than doubles all on-chip performance characteristics – transistor count, core count, memory, memory bandwidth and fabric bandwidth – over the first-generation WSE. The result, says the company, is that on every performance metric, the WSE-2 is orders of magnitude larger and more performant than any competing GPU on the market.
“Less than two years ago, Cerebras revolutionized the industry with the introduction of WSE, the world’s first wafer scale processor,” says Dhiraj Mallik, Vice President Hardware Engineering, Cerebras Systems. “In AI compute, big chips are king, as they process information more quickly, producing answers in less time – and time is the enemy of progress in AI. The WSE-2 solves this major challenge as the industry’s fastest and largest AI processor ever made.”
With every component optimized for AI work, the CS-2, says the company, delivers more compute performance in less space and at lower power than any other system. Depending on the workload, from AI to HPC, the CS-2 is claimed to deliver hundreds or thousands of times more performance than legacy alternatives, at a fraction of the power draw and space.
A single CS-2, says the company, replaces clusters of hundreds or thousands of GPUs that consume dozens of racks, use hundreds of kilowatts of power, and take months to configure and program. At only 26 inches tall, the CS-2 fits in one-third of a standard data center rack.
The company says that its first-generation Cerebras WSE and CS-1 have been deployed by a variety of customers over the past year, including Argonne National Laboratory, Lawrence Livermore National Laboratory, Pittsburgh Supercomputing Center (PSC) for its groundbreaking Neocortex AI supercomputer, EPCC, the supercomputing centre at the University of Edinburgh, pharmaceutical leader GlaxoSmithKline, Tokyo Electron Devices, and more.
Related articles:
Giant trillion transistor chip built for AI
Cerebras Wafer Scale Engine: An Introduction
10 ‘coolest’ AI chip startups of 2020 – CRN
