Software suite delivers 20x data processing speed boost

November 19, 2019 //By Julien Happich
Nvidia’s Magnum IO is a suite of software developed to help data scientists and researchers in AI and high-performance computing process massive amounts of data in minutes rather than hours.

Optimized to eliminate storage and input/output bottlenecks, Magnum IO delivers up to 20x faster data processing for multi-server, multi-GPU computing nodes when working with massive datasets to carry out complex financial analysis, climate modeling and other HPC workloads.

The Magnum IO suite was developed in close collaboration with industry leaders in networking and storage, including DataDirect Networks, Excelero, IBM, Mellanox and WekaIO.

“Processing large amounts of collected or simulated data is at the heart of data-driven sciences like AI,” said Jensen Huang, founder and CEO of Nvidia. “As the scale and velocity of data grow exponentially, processing it has become one of data centers’ great challenges and costs.

“Extreme compute needs extreme I/O. Magnum IO delivers this by bringing Nvidia GPU acceleration, which has revolutionized computing, to I/O and storage. Now, AI researchers and data scientists can stop waiting on data and focus on doing their life’s work,” he said.

At the heart of Magnum IO is GPUDirect, which provides a path for data to bypass CPUs and travel on “open highways” offered by GPUs, storage and networking devices. Compatible with a wide range of communications interconnects and APIs, including Nvidia NVLink and NCCL, as well as OpenMPI and UCX, GPUDirect is composed of peer-to-peer and RDMA elements.

Its newest element is GPUDirect Storage, which enables researchers to bypass CPUs when accessing storage and quickly access data files for simulation, analysis or visualization.
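As a rough illustration of the data path GPUDirect Storage exposes, the sketch below uses Nvidia's cuFile API (part of the GPUDirect Storage library) to read a file straight into GPU memory, avoiding a CPU bounce buffer. This is a minimal sketch of the public API flow under stated assumptions, not Nvidia's implementation: the file path and transfer size are placeholders, error handling is abbreviated, and running it requires a GPUDirect Storage-capable system with libcufile installed.

```c
/* Minimal GPUDirect Storage sketch: read a file directly into GPU memory.
 * Assumptions: CUDA toolkit with libcufile, a supported filesystem/driver,
 * and a placeholder file "/path/to/data.bin" (hypothetical). */
#include <cufile.h>
#include <cuda_runtime.h>
#include <fcntl.h>
#include <unistd.h>
#include <string.h>
#include <stdio.h>

int main(void) {
    const size_t size = 1 << 20;          /* 1 MiB transfer, placeholder */
    void *dev_ptr = NULL;

    /* Open the cuFile driver before any other cuFile call. */
    if (cuFileDriverOpen().err != CU_FILE_SUCCESS) return 1;

    /* O_DIRECT lets the DMA engine move data without the page cache. */
    int fd = open("/path/to/data.bin", O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    /* Register the POSIX fd with cuFile to get a cuFile handle. */
    CUfileDescr_t descr;
    memset(&descr, 0, sizeof(descr));
    descr.handle.fd = fd;
    descr.type = CU_FILE_HANDLE_TYPE_OPAQUE_FD;
    CUfileHandle_t handle;
    if (cuFileHandleRegister(&handle, &descr).err != CU_FILE_SUCCESS) return 1;

    /* Allocate and register the GPU buffer that will receive the data. */
    cudaMalloc(&dev_ptr, size);
    cuFileBufRegister(dev_ptr, size, 0);

    /* The read lands in GPU memory; the CPU never touches the payload. */
    ssize_t n = cuFileRead(handle, dev_ptr, size, /*file_offset=*/0,
                           /*devPtr_offset=*/0);
    printf("read %zd bytes into GPU memory\n", n);

    /* Teardown in reverse order of setup. */
    cuFileBufDeregister(dev_ptr);
    cudaFree(dev_ptr);
    cuFileHandleDeregister(handle);
    close(fd);
    cuFileDriverClose();
    return 0;
}
```

Without GPUDirect Storage, the same read would typically be staged through a host buffer (`read()` into pinned host memory, then `cudaMemcpy` to the device); the cuFile path collapses those two hops into one DMA transfer.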

Nvidia
