Nvidia looks to 1kW for next generation GPU

Technology News
By Nick Flaherty

Nvidia’s next-generation GPU is set to consume up to 1000W of power, says a leading customer.

The key for the Nvidia B100, the follow-up to the H200, will be the power and thermal management, says Jeffrey Clarke, chief operating officer and vice chairman of Dell Technologies.

“Demand continues to outpace GPU supply, though we are seeing H100 lead times improving,” said Clarke. “We are also seeing strong interest in orders for AI-optimized servers equipped with the next generation of AI GPUs, including the [Nvidia] H200 and the [AMD] MI300X. Most customers are still in the early stages of their AI journey, and they are very interested in what we are doing at Dell.”

“We have a product transition that was in front of us that we have to work on, H100, H200, to be specific, and we’re taking orders on the new stuff as well as converting current pipeline on the current product. Last year was basically the H100 show. This year, [there are] four different variants, and there’s a transition associated with that,” he said.

This includes the transition to the H200 Hopper GPU and to the B200, based on the Blackwell GPU.

“We’re excited about what’s happening with the H200 and its performance improvement. We’re excited about what happens at the B100 and the B200, and we think that’s where there’s actually another opportunity to distinguish engineering confidence,” he said.

“Our characterization in the thermal side, you really don’t need direct liquid cooling to get to the energy density of 1,000 watts per GPU. That happens next year with the B200.”

“The opportunity for us is really to showcase our engineering and how fast we can move and the work that we’ve done as an industry leader to bring our expertise to make liquid cooling perform at scale, whether that’s in fluid chemistry and performance, our interconnect work, the telemetry we are doing or the power management work we’re doing.”

“To create intelligence with generative AI and HPC applications, vast amounts of data must be efficiently processed at high speed using large, fast GPU memory,” said Ian Buck, vice president of hyperscale and HPC at Nvidia. “With the H200, the industry’s leading end-to-end AI supercomputing platform just got faster to solve some of the world’s most important challenges.”

The H200 is available in the Nvidia HGX H200 server boards with four- and eight-way configurations, which are compatible with both the hardware and software of HGX H100 systems.
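For a rough sense of what 1kW-class GPUs mean at the server and rack level, a minimal back-of-the-envelope sketch in Python is shown below. It uses the 1,000W-per-GPU figure cited above and the four- and eight-way HGX configurations; the per-server host overhead and the number of servers per rack are illustrative assumptions, not Dell or Nvidia figures.

```python
# Back-of-the-envelope power estimate for 1 kW-class GPUs.
# The host overhead and rack density below are assumptions, not vendor figures.

GPU_POWER_W = 1000       # ~1 kW per GPU, as cited for the B200 generation
HOST_OVERHEAD_W = 2000   # assumed CPUs, memory, NICs and fans per server
SERVERS_PER_RACK = 4     # assumed rack density

def server_power_w(gpus_per_server: int) -> int:
    """Estimated power draw of one GPU server in watts."""
    return gpus_per_server * GPU_POWER_W + HOST_OVERHEAD_W

for gpus in (4, 8):  # four- and eight-way HGX configurations
    per_server = server_power_w(gpus)
    per_rack = per_server * SERVERS_PER_RACK
    print(f"{gpus}-way server: ~{per_server / 1000:.1f} kW, "
          f"rack of {SERVERS_PER_RACK}: ~{per_rack / 1000:.1f} kW")
```

At these densities a rack of eight-way servers lands in the tens of kilowatts, which is the scale at which the liquid cooling, telemetry and power management work Clarke describes becomes relevant.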

www.nvidia.com; www.dell.com