
Nvidia details Jupiter AI supercomputer with 24,000 Grace Hopper superchips

Europe’s leading supercomputer will use a new quad Grace Hopper node built around the GH200, which pairs an Arm CPU with an Nvidia GPU on a single superchip, when it is installed next year.
The Jupiter AI supercomputer will have nearly 24,000 GH200 chips interconnected with the Nvidia Quantum-2 InfiniBand networking platform. This will make it the most powerful AI supercomputer in the world, with over 90 exaflops of AI performance, Nvidia says.
The quad GH200 node architecture features 288 Arm Neoverse cores and is capable of 16 petaflops of AI performance, with up to 2.3 terabytes of high-speed memory. The four GH200s in each node are networked through NVLink.
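Those per-node figures line up with Nvidia's published GH200 specifications. The minimal Python sketch below is a back-of-the-envelope check only; the 72 Neoverse V2 cores per Grace CPU and the per-superchip memory split are assumptions drawn from public GH200 spec sheets, not figures from this announcement.

```python
# Back-of-the-envelope check of the quad-GH200 node figures quoted above.
# Per-chip numbers are assumptions from Nvidia's public GH200 specs
# (72 Neoverse V2 cores per Grace CPU; 480 GB LPDDR5X + 96 GB HBM3 per
# superchip), not from this announcement.

GH200_PER_NODE = 4
CORES_PER_GRACE_CPU = 72   # Arm Neoverse V2 cores per Grace CPU (assumed)
LPDDR5X_GB = 480           # CPU-attached memory per superchip (assumed)
HBM3_GB = 96               # GPU-attached memory per superchip (assumed)

cores_per_node = GH200_PER_NODE * CORES_PER_GRACE_CPU
memory_per_node_tb = GH200_PER_NODE * (LPDDR5X_GB + HBM3_GB) / 1000

print(f"Arm cores per node:   {cores_per_node}")          # 288, as quoted
print(f"Memory per node (TB): {memory_per_node_tb:.1f}")  # ~2.3 TB, as quoted
```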
The AI supercomputer is owned by the EuroHPC Joint Undertaking, contracted to Eviden and ParTec, and will be hosted at the Forschungszentrum Jülich facility in Germany. It is being built in collaboration with Nvidia, ParTec, Eviden, and European chip designer SiPearl.
The configuration is based on Eviden’s BullSequana XH3000 liquid-cooled architecture, with a booster module comprising close to 24,000 Nvidia GH200 Superchips. This can deliver over 90 exaflops of performance for AI training, 45x more than Jülich’s previous JUWELS Booster system, and 1 exaflop for high-performance computing (HPC) applications, while consuming 18.2 MW of power.
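Scaling the quoted node figures up to the full booster module roughly reproduces those headline numbers. The sketch below is an estimate only: the exact chip count, the AI precision behind the exaflops figure, and the implied JUWELS Booster baseline are assumptions derived from the numbers in this article rather than official values.

```python
# Rough aggregation of the system-level figures from the quoted node specs.
TOTAL_GH200 = 24_000       # "close to 24,000" superchips
GH200_PER_NODE = 4
NODE_AI_PETAFLOPS = 16     # per quad-GH200 node, as quoted above

nodes = TOTAL_GH200 // GH200_PER_NODE            # ~6,000 nodes
ai_exaflops = nodes * NODE_AI_PETAFLOPS / 1000   # ~96 exaflops, consistent with "over 90"
juwels_booster_exaflops = 90 / 45                # ~2 AI exaflops implied for the older system
exaflops_per_mw = 90 / 18.2                      # ~4.9 AI exaflops per MW at the quoted draw

print(f"Nodes:                  ~{nodes:,}")
print(f"AI performance:         ~{ai_exaflops:.0f} exaflops")
print(f"Implied JUWELS Booster: ~{juwels_booster_exaflops:.0f} AI exaflops")
print(f"Efficiency:             ~{exaflops_per_mw:.1f} AI exaflops/MW")
```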
The UK’s Isambard AI supercomputer, which will be installed in Bristol, is also set to use the GH200.
“At the heart of Jupiter is Nvidia’s accelerated computing platform, making it a groundbreaking system that will revolutionize scientific research,” said Thomas Lippert, director of the Jülich Supercomputing Centre. “JUPITER combines exascale AI and exascale HPC with the world’s best AI software ecosystem to boost the training of foundational models to new heights.”
Installation of the JUPITER system is expected in 2024.
