
Nvidia, Intel team on exascale AI in data centres
Nvidia has paired its H100 GPUs with Intel's latest fourth-generation Xeon processors for AI processing in data centres.
The CPUs will be used in Nvidia’s DGX H100 systems as well as in more than 60 servers featuring H100 GPUs from Nvidia partners around the world.
The 4th Gen Intel Xeon CPUs support PCIe Gen 5, which doubles data transfer rates between the CPU and the GPUs and networking compared with PCIe Gen 4. This allows a greater density of GPUs and high-speed networking, at speeds of up to 400 Gbit/s, within each server. The CPUs also support high-speed DDR5 memory and the CXL high-speed interconnect.
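The doubling of transfer rates follows directly from the PCIe signalling rates: Gen 5 runs at 32 GT/s per lane versus 16 GT/s for Gen 4, both with 128b/130b encoding. A rough sketch of the per-direction bandwidth of a typical x16 GPU link (figures are approximate raw link rates, not measured throughput):

```python
def pcie_bandwidth_gbytes(gt_per_s: float, lanes: int = 16) -> float:
    """Approximate per-direction PCIe link bandwidth in GB/s.

    128b/130b line encoding (PCIe Gen 3 and later) costs 2/130 of
    the raw rate; divide by 8 to convert gigatransfers (bits) to bytes.
    """
    return gt_per_s * lanes * (128 / 130) / 8

gen4 = pcie_bandwidth_gbytes(16)  # ~31.5 GB/s per direction, x16
gen5 = pcie_bandwidth_gbytes(32)  # ~63.0 GB/s per direction, x16
print(f"Gen 4 x16: {gen4:.1f} GB/s, Gen 5 x16: {gen5:.1f} GB/s")
```

The ratio between the two is exactly 2, which is where the "double the data transfer rates" claim comes from.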
Each DGX H100 system uses eight Nvidia H100 GPUs, ten Nvidia ConnectX-7 network adapters and two 4th Gen Intel Xeon Scalable processors. The DGX H100 systems are the building blocks of an enterprise-ready, turnkey DGX SuperPOD, which delivers up to one exaflop of AI performance.
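The exaflop figure is low-precision AI throughput, not FP64. It roughly checks out if one assumes a 32-system SuperPOD and approximately 4 petaflops of peak FP8 performance per H100 (both figures are assumptions from Nvidia's published H100 material, not stated in this article):

```python
# Back-of-envelope check of the "one exaflop" SuperPOD claim.
systems = 32               # assumed DGX SuperPOD scale (not stated in the article)
gpus_per_system = 8        # eight H100 GPUs per DGX H100 (from the article)
fp8_pflops_per_gpu = 4     # approximate H100 peak FP8 petaflops (assumption)

total_pflops = systems * gpus_per_system * fp8_pflops_per_gpu
print(f"{total_pflops} PFLOPS ~= {total_pflops / 1000:.1f} exaflops of FP8 AI compute")
```

Under those assumptions the cluster lands at roughly 1,000 petaflops, matching the headline exaflop number.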
The ConnectX-7 adapters support PCIe Gen 5 and 400 Gbit/s per connection over Ethernet or InfiniBand, doubling networking throughput between servers and to storage, and offer advanced networking, storage and security offloads. ConnectX-7 also reduces the number of cables and switch ports needed, saving 17% or more on the electricity needed to network large GPU-accelerated HPC and AI clusters and contributing to the better energy efficiency of these new servers.
GPU servers will be available from ASUS, Atos, Cisco, Dell Technologies, Fujitsu, GIGABYTE, Hewlett Packard Enterprise, Lenovo, QCT and Supermicro.