
Deep learning processing unit delivers 135 GOPS/W on midrange FPGAs

New Products | By eeNews Europe



The Omnitek deep learning processing unit (DPU) employs a novel mathematical framework that combines low-precision fixed-point maths with floating-point maths to achieve 135 GOPS/W at full 32-bit floating-point accuracy when running the VGG-16 CNN on an Arria 10 GX 1150. Scalable across a wide range of Arria 10 GX and Stratix 10 GX devices, the DPU can be tuned for low cost or high performance in either embedded or data centre applications. The DPU is fully software programmable in C/C++ or Python using standard frameworks such as TensorFlow, so it can be configured for a wide range of standard CNN models, including GoogLeNet, ResNet-50 and VGG-16, as well as custom models. No FPGA design expertise is required to do this.
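Omnitek has not published details of its toolchain, but the general shape of the mixed fixed-point/floating-point workflow it describes can be sketched with stock TensorFlow. The snippet below is a generic illustration, not Omnitek's flow: it loads one of the standard models named above (VGG-16) and applies TensorFlow Lite post-training quantization, so weights are stored in low-precision fixed point while activations remain in floating point.

# Minimal sketch (generic TensorFlow, not Omnitek's DPU toolchain):
# load a stock VGG-16 and apply post-training dynamic-range quantization,
# mixing 8-bit fixed-point weights with floating-point activations.
import tensorflow as tf

# One of the standard CNN models listed in the article, with ImageNet weights.
model = tf.keras.applications.VGG16(weights="imagenet")

# Convert with post-training quantization: weights -> 8-bit fixed point,
# inference arithmetic falls back to floating point where needed.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
quantized_model = converter.convert()

with open("vgg16_quantized.tflite", "wb") as f:
    f.write(quantized_model)

In Omnitek's flow, the equivalent step would presumably target the DPU's configuration rather than a TFLite file, which is what allows the model to be deployed without FPGA design expertise.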

“We are very excited to apply this unique innovation, resulting from our joint research program with Oxford University, to reducing the cost of a whole slew of AI-enabled applications, particularly in video and imaging where we have a rich library of highly optimised IP to complement the DPU and create complete systems on a chip”, commented Roger Fawcett, CEO at Omnitek.

Omnitek – www.Omnitek.tv

Related articles:

Convolutional Neural Network on FPGA beats all efficiency benchmarks

Xilinx matches FPGAs with AI on dedicated platforms


