Nvidia is fighting back in high-performance computing, detailing its ninth-generation graphics processing unit, Hopper, which can be used to build exascale (1,000 PFLOPS) machine learning systems.
At the Hot Chips conference in California, the company showed more details of the Hopper GPU and how it can be used to build exascale AI supercomputers.
This follows the commissioning of the first US exascale machine, Frontier, at Oak Ridge National Laboratory (ORNL), which uses AMD processors and GPUs. Nvidia has also been working with European chip designer SiPearl on exascale computer designs.
- ORNL details first exascale supercomputer
- AMD details GPU for exascale computing
- SiPearl confirms Nvidia supercomputer deal
The Nvidia approach would see 32 DGX H100 nodes, with a total of 256 H100 Hopper GPUs, connected by 164 NVLink4 NVSwitch chips with 70.4 TB/s of bisection bandwidth. This would give 1 exaFLOP of peak AI compute, says Jack Choquette, CPU and system architect and designer at Nvidia, who was also one of the lead chip designers on the Nintendo 64 console early in his career.
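As a back-of-envelope check, those totals follow from the per-node figures in Nvidia's published DGX H100 specifications, which are assumed here rather than taken from the talk: eight GPUs and four internal NVSwitch chips per node, 18 external NVLink switch trays of two chips each, and roughly 3.96 PFLOPS of peak FP8 Tensor Core throughput (with sparsity) per H100.

```python
# Back-of-envelope check of the DGX H100 SuperPOD figures quoted above.
# Per-node GPU/switch counts and per-GPU FP8 throughput are assumed from
# Nvidia's published DGX H100 specifications, not from this article.

NODES = 32
GPUS_PER_NODE = 8
INTERNAL_NVSWITCH_PER_NODE = 4
EXTERNAL_SWITCH_TRAYS = 18
CHIPS_PER_TRAY = 2
FP8_PFLOPS_PER_GPU = 3.958          # H100 SXM, FP8 Tensor Core with sparsity

gpus = NODES * GPUS_PER_NODE                                  # 256 GPUs
nvswitch_chips = (NODES * INTERNAL_NVSWITCH_PER_NODE
                  + EXTERNAL_SWITCH_TRAYS * CHIPS_PER_TRAY)   # 164 chips
peak_ai_pflops = gpus * FP8_PFLOPS_PER_GPU                    # ~1,013 PFLOPS

print(f"{gpus} GPUs, {nvswitch_chips} NVSwitch chips, "
      f"{peak_ai_pflops:.0f} PFLOPS peak FP8 compute")
```

Running the sketch reproduces the 256-GPU and 164-switch counts, and lands at roughly 1,013 PFLOPS, just over the 1 exaFLOP of peak AI compute quoted above.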
A new Transformer Engine and upgraded Tensor Cores in Hopper deliver a 30x speedup over the previous generation for AI inference on the world's largest neural network models. The chip also uses the world's first HBM3 memory system, delivering 3 TByte/s of memory bandwidth.
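The key idea behind the Transformer Engine is to run transformer layers in 8-bit floating point where accuracy allows, rescaling each tensor so its values fit FP8's narrow dynamic range. The sketch below illustrates that per-tensor scaling in plain NumPy; it is a generic illustration of the technique, not Nvidia's actual API, and the E4M3 maximum of 448 is taken from the standard FP8 format definition.

```python
import numpy as np

# Minimal sketch of per-tensor scaling for FP8 inference: scale a tensor
# so its largest value fits the FP8 E4M3 range, quantise, and carry the
# scale factor alongside it. Integer rounding stands in here for FP8's
# limited mantissa; real Tensor Cores do the conversion natively.

FP8_E4M3_MAX = 448.0

def to_fp8_scaled(x: np.ndarray):
    """Return a coarsely quantised tensor plus the scale factor used."""
    amax = max(float(np.max(np.abs(x))), 1e-12)   # avoid divide-by-zero
    scale = FP8_E4M3_MAX / amax
    quantised = np.round(x * scale)               # coarse stand-in for FP8
    return quantised, scale

def from_fp8_scaled(q: np.ndarray, scale: float) -> np.ndarray:
    """Undo the scaling to recover an approximation of the original tensor."""
    return q / scale

x = np.random.randn(4, 4).astype(np.float32)
q, s = to_fp8_scaled(x)
print(np.max(np.abs(x - from_fp8_scaled(q, s))))  # small quantisation error
```

Because the scale is chosen per tensor, each layer's activations use the full FP8 range, which is what lets 8-bit arithmetic hold accuracy on large transformer models.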
There are more details in a white paper on the Hopper GPU.
Other exascale supercomputer articles
- Europe aims for open source exascale supercomputers
- SiPearl in massive expansion for exascale chip design
- SiPearl, Intel team for supercomputer GPU