Matlab accelerates deep learning applications on Nvidia chips

July 16, 2018 // By Christoph Hammerschmidt
Matlab, one of the most widely used tools in the engineer's toolbox, has now received an integration option for Nvidia's inference optimization software TensorRT. This makes it easier for users to develop new AI and deep learning models in Matlab. Matlab provider MathWorks promises deep learning inference up to five times faster on Nvidia GPUs than with TensorFlow.

The software addresses the growing requirements of applications in the embedded and automotive sectors. The connection to TensorRT is made through GPU Coder, which generates optimized code for graphics chips; the main application areas are deep learning, embedded vision and autonomous systems.

Matlab provides a complete workflow for rapid training, validation and deployment of deep learning models. Engineers can use GPU resources without additional programming, allowing them to focus on their applications instead of on performance tuning. The new integration of Nvidia TensorRT with GPU Coder enables deep learning models developed in Matlab to run on Nvidia GPUs with high throughput and low latency. Internal benchmarks show that CUDA code generated by Matlab in combination with TensorRT delivers five times higher inference performance for Alexnet, and 1.25 times higher for VGG-16, than the TensorFlow reference implementations of the same networks.
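As a rough illustration of the workflow described above, a pretrained network can be wrapped in an entry-point function and compiled with GPU Coder, selecting TensorRT as the deep learning backend. This is a minimal sketch following MathWorks' documented pattern; the function name and input size are illustrative, and it assumes the GPU Coder product plus the Alexnet model support package are installed.

```matlab
% alexnet_predict.m -- hypothetical entry-point function for code generation.
% Loads the pretrained network once and runs inference on a single image.
function out = alexnet_predict(in) %#codegen
    persistent net;
    if isempty(net)
        net = coder.loadDeepLearningNetwork('alexnet');
    end
    out = net.predict(in);
end
```

Code generation is then invoked from the Matlab prompt, e.g. `cfg = coder.gpuConfig('mex'); cfg.DeepLearningConfig = coder.DeepLearningConfig('tensorrt'); codegen -config cfg alexnet_predict -args {ones(227,227,3,'single')}`, which emits CUDA code whose inference layers call into the TensorRT runtime.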

Mathworks will showcase the new software at the GPU Technology Conference from October 9 to 11 in Munich.
