Nvidia fights back in exascale computing

Technology News
By Nick Flaherty

Nvidia is fighting back in high-performance supercomputing, detailing its ninth-generation graphics processing unit, called Hopper, which can be used for exascale (1,000 PFLOPS) machine learning systems.

At the Hot Chips conference in California, the company showed more details of its Hopper GPU and how it can be used to build exascale AI supercomputers.

This follows the commissioning of the first US exascale machine, called Frontier, at Oak Ridge National Laboratory (ORNL), which uses AMD processors and GPUs. Nvidia has also been working with European chip designer SiPearl on exascale computer designs.

The Nvidia approach would see 32 DGX H100 nodes with a total of 256 H100 Hopper GPUs, connected by 164 NVLink4 NVSwitch chips with 70.4 TB/s of bisection bandwidth. This would give 1 ExaFLOP of peak AI compute, says Jack Choquette, CPU and System Architect and Designer at Nvidia, who was also one of the lead chip designers on the Nintendo 64 console early in his career.
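The arithmetic behind the 1 ExaFLOP figure can be sketched as follows. Note that the per-GPU peak throughput used here is an assumption drawn from Nvidia's published H100 FP8 Tensor Core figures (roughly 4 PFLOPS with sparsity), not a number stated in this article:

```python
# Sketch of the aggregate-compute arithmetic for a 32-node DGX H100 system.
# Assumption: ~4 PFLOPS peak FP8 Tensor Core throughput per H100 (with
# sparsity); this per-GPU figure is illustrative, not from the article.
GPUS_PER_NODE = 8               # a DGX H100 node holds eight H100 GPUs
NODES = 32
PEAK_FP8_PFLOPS_PER_GPU = 4.0   # assumed per-GPU peak

total_gpus = GPUS_PER_NODE * NODES                    # 256 GPUs in total
peak_pflops = total_gpus * PEAK_FP8_PFLOPS_PER_GPU
print(f"{total_gpus} GPUs -> {peak_pflops:.0f} PFLOPS "
      f"(~{peak_pflops / 1000:.1f} EFLOPS)")
# -> 256 GPUs -> 1024 PFLOPS (~1.0 EFLOPS)
```

With these assumed figures, 256 GPUs land just above the 1 ExaFLOP mark for reduced-precision AI compute.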

A new Transformer Engine and upgraded Tensor Cores in Hopper deliver a 30x speedup over the prior generation on AI inference with the world's largest neural network models. The chip also uses the world's first HBM3 memory system, delivering 3 Tbytes/s of memory bandwidth.

There are more details in a white paper on the Hopper GPU.
