Matlab accelerates deep learning applications on Nvidia chips

New Products | By Christoph Hammerschmidt

The software addresses the growing demands of applications in the embedded and automotive sectors. The link to TensorRT is provided by the GPU Coder software, which generates optimized CUDA code for Nvidia graphics chips; the main application areas are deep learning, embedded vision and autonomous systems.

Matlab provides a complete workflow for the rapid training, validation and deployment of deep learning models. Engineers can use GPU resources without additional programming, allowing them to focus on their applications rather than on performance tuning. The new integration of Nvidia's TensorRT with GPU Coder enables deep learning models developed in Matlab to run on Nvidia GPUs with high throughput and low latency. Internal benchmarks show that CUDA code generated by Matlab in combination with TensorRT delivers up to five times higher throughput for Alexnet, and 1.25 times higher for VGG-16, than TensorFlow's deep learning reference implementations of the same networks.
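The article does not show the workflow itself, but based on GPU Coder's documented interface it can be sketched roughly as follows; the entry-point function name, file names and input size are illustrative, and the code assumes Matlab with Deep Learning Toolbox, GPU Coder and the TensorRT support package installed.

```matlab
% Sketch: generating TensorRT-optimized CUDA code for a pretrained
% network with GPU Coder. Names and sizes are illustrative.

% Entry-point function, saved as alexnet_predict.m:
%   function out = alexnet_predict(in)
%     persistent mynet;
%     if isempty(mynet)
%         % Load the pretrained Alexnet from Deep Learning Toolbox
%         mynet = coder.loadDeepLearningNetwork('alexnet');
%     end
%     out = predict(mynet, in);
%   end

cfg = coder.gpuConfig('mex');                                  % build a CUDA MEX target
cfg.DeepLearningConfig = coder.DeepLearningConfig('tensorrt'); % target TensorRT instead of cuDNN
% Alexnet expects a 227x227x3 single-precision input image
codegen -config cfg alexnet_predict -args {ones(227,227,3,'single')}
```

The generated MEX function can then be called from Matlab like the original `alexnet_predict`, with inference running through TensorRT on the GPU.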

MathWorks will showcase the new software at the GPU Technology Conference from October 9 to 11 in Munich.

