Deep learning processing unit delivers 135 GOPS/W on midrange FPGAs

April 17, 2019 // By Julien Happich
IP provider Omnitek has released a novel deep learning processing unit, a Convolutional Neural Network (CNN) accelerator optimised for the Intel Arria 10 GX FPGA architecture.

The Omnitek deep learning processing unit (DPU) employs a novel mathematical framework combining low-precision fixed-point maths with floating-point maths to achieve 135 GOPS/W at full 32-bit floating-point accuracy when running the VGG-16 CNN on an Arria 10 GX 1150. Scalable across a wide range of Arria 10 GX and Stratix 10 GX devices, the DPU can be tuned for low cost or high performance in either embedded or data centre applications. The DPU is fully software programmable in C/C++ or Python using standard frameworks such as TensorFlow, enabling it to be configured for a wide range of standard CNN models including GoogLeNet, ResNet-50 and VGG-16, as well as custom models. No FPGA design expertise is required to do this.
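As a rough illustration of the framework-level workflow described above, the sketch below defines a standard VGG-16 graph in TensorFlow's Keras API, the kind of model description a vendor toolchain would then map onto the accelerator. Omnitek's own DPU compiler and runtime calls are not public in this article, so only the TensorFlow side is shown; the model and input shapes are the standard ones for VGG-16.

```python
import tensorflow as tf

# Build a standard VGG-16 graph from TensorFlow's model zoo.
# This is the framework-level description a DPU toolchain would ingest;
# weights=None keeps the example self-contained (no download needed).
model = tf.keras.applications.VGG16(weights=None)

# Inspect the convolution and fully-connected layers that an FPGA overlay
# would map onto its processing engines.
model.summary()

# Run a dummy 224x224 RGB image through the network to check shapes.
dummy_input = tf.random.uniform((1, 224, 224, 3))
predictions = model(dummy_input)
print(predictions.shape)  # (1, 1000) class scores
```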

“We are very excited to apply this unique innovation, resulting from our joint research program with Oxford University, to reducing the cost of a whole slew of AI-enabled applications, particularly in video and imaging where we have a rich library of highly optimised IP to complement the DPU and create complete systems on a chip”, commented Roger Fawcett, CEO at Omnitek.

Omnitek - www.Omnitek.tv

Related articles:

Convolutional Neural Network on FPGA beats all efficiency benchmarks

Xilinx matches FPGAs with AI on dedicated platforms

