Nvidia software helps eliminate data bottlenecks for AI researchers

New Products | By Rich Pell

Nvidia's Magnum IO software suite is optimized to eliminate storage and input/output bottlenecks, delivering up to 20x faster data processing for multi-server, multi-GPU computing nodes working with the massive datasets behind complex financial analysis, climate modeling and other HPC workloads.

The Magnum IO suite was developed in close collaboration with industry leaders in networking and storage, including DataDirect Networks, Excelero, IBM, Mellanox and WekaIO.

“Processing large amounts of collected or simulated data is at the heart of data-driven sciences like AI,” said Jensen Huang, founder and CEO of Nvidia. “As the scale and velocity of data grow exponentially, processing it has become one of data centers’ great challenges and costs.

“Extreme compute needs extreme I/O. Magnum IO delivers this by bringing Nvidia GPU acceleration, which has revolutionized computing, to I/O and storage. Now, AI researchers and data scientists can stop waiting on data and focus on doing their life’s work,” he said.

At the heart of Magnum IO is GPUDirect, which provides a path for data to bypass CPUs and travel on “open highways” offered by GPUs, storage and networking devices. Compatible with a wide range of communications interconnects and APIs, including Nvidia NVLink and NCCL, as well as OpenMPI and UCX, GPUDirect is composed of peer-to-peer and RDMA elements.
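The peer-to-peer element rests on CUDA's standard peer-access API, which lets one GPU read or write another GPU's memory directly over NVLink or PCIe. The sketch below is illustrative only, not Magnum IO's internals: it assumes a machine with two peer-capable GPUs (device IDs 0 and 1 and the buffer size are arbitrary) and copies a buffer between them without staging through host memory.

```c
#include <cuda_runtime.h>
#include <stdio.h>

int main(void) {
    // Ask the runtime whether GPU 0 can directly address GPU 1's memory
    // over NVLink or PCIe -- the path GPUDirect peer-to-peer relies on.
    int canAccess = 0;
    cudaDeviceCanAccessPeer(&canAccess, 0, 1);
    if (!canAccess) {
        printf("GPU 0 cannot access GPU 1 as a peer on this system\n");
        return 0;
    }

    const size_t bytes = 1 << 20;      // arbitrary 1 MiB test buffer
    float *src = NULL, *dst = NULL;

    cudaSetDevice(1);
    cudaMalloc(&src, bytes);           // source buffer on GPU 1

    cudaSetDevice(0);
    cudaMalloc(&dst, bytes);           // destination buffer on GPU 0
    cudaDeviceEnablePeerAccess(1, 0);  // map GPU 1's memory into GPU 0's address space

    // Device-to-device copy that never passes through CPU (host) memory.
    cudaMemcpyPeer(dst, 0, src, 1, bytes);
    cudaDeviceSynchronize();
    printf("copied %zu bytes GPU 1 -> GPU 0 peer-to-peer\n", bytes);
    return 0;
}
```

Note that cudaMemcpyPeer still works when peer access is unavailable, but it falls back to staging the transfer through host memory, which is exactly the CPU detour GPUDirect is designed to avoid.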

Its newest element is GPUDirect Storage, which enables researchers to bypass CPUs when accessing storage and quickly access data files for simulation, analysis or visualization.
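GPUDirect Storage is exposed to applications through Nvidia's cuFile API. The following is a minimal sketch under stated assumptions: the file path is a placeholder, the file must live on a GDS-capable filesystem and be opened with O_DIRECT as the DMA path requires, error handling is omitted for brevity, and the program links against libcufile.

```c
#define _GNU_SOURCE                        // for O_DIRECT
#include <cufile.h>
#include <cuda_runtime.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

int main(void) {
    const size_t bytes = 1 << 20;          // arbitrary 1 MiB read
    void *devPtr = NULL;
    cudaMalloc(&devPtr, bytes);            // destination buffer lives in GPU memory

    // Placeholder path: any file on a GPUDirect Storage-capable filesystem.
    int fd = open("/mnt/data/sample.bin", O_RDONLY | O_DIRECT);

    cuFileDriverOpen();                    // initialize the GPUDirect Storage driver

    CUfileDescr_t descr = {0};
    descr.handle.fd = fd;
    descr.type = CU_FILE_HANDLE_TYPE_OPAQUE_FD;

    CUfileHandle_t handle;
    cuFileHandleRegister(&handle, &descr);

    // DMA straight from storage into GPU memory, bypassing a CPU bounce buffer.
    ssize_t n = cuFileRead(handle, devPtr, bytes,
                           0 /* file offset */, 0 /* device offset */);
    printf("read %zd bytes directly into GPU memory\n", n);

    cuFileHandleDeregister(handle);
    cuFileDriverClose();
    close(fd);
    cudaFree(devPtr);
    return 0;
}
```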

Nvidia – www.nvidia.com

