Imagination launches flexible neural network IP

September 21, 2017 //By Peter Clarke
Imagination has sought to take an early-mover advantage in the market for neural networking cores with the PowerVR 2NX, which provides flexible bit depth down to 4 bits.

The 2NX is designed for deployment in "leaf node" applications such as mobile surveillance, automotive and consumer systems, and is intended to directly complement GPU cores. In a 16nm process technology the 2NX will occupy from less than 1 square millimeter up to a few square millimeters.

Both the learning/training of convolutional neural networks, which is a lengthy process, and the inferencing of specific examples have tended to be done in the cloud. But for reasons of latency and power efficiency, many applications need at least the inferencing to be done on the leaf nodes; collision avoidance for drones and automobiles is one example.
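The practical payoff of the 2NX's flexible precision is that weights and activations stored at, say, 4 bits consume a fraction of the memory bandwidth of 32-bit floats. As a rough illustration only, and not Imagination's actual quantization scheme, the NumPy sketch below shows symmetric uniform quantization of a weight tensor to a chosen bit depth:

```python
import numpy as np

def quantize_symmetric(weights: np.ndarray, bits: int):
    """Quantize a float tensor to signed integer codes of the given bit depth.

    Returns the integer codes and the scale needed to reconstruct
    approximate float values (codes * scale). Illustrative only.
    """
    qmax = 2 ** (bits - 1) - 1                      # e.g. 7 for 4-bit signed
    scale = max(np.abs(weights).max(), 1e-12) / qmax
    codes = np.clip(np.round(weights / scale), -qmax - 1, qmax).astype(np.int8)
    return codes, scale

# Example: per-weight storage drops from 32 bits to a nominal 4 bits
w = np.random.randn(1000).astype(np.float32)
codes4, scale4 = quantize_symmetric(w, bits=4)
w_approx = codes4.astype(np.float32) * scale4
print("max abs error:", np.abs(w - w_approx).max())
```

At 4 bits each weight nominally occupies an eighth of the storage of a 32-bit float, which is where the bandwidth and power savings come from; the trade-off is the quantization error printed above.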

Block diagram of the PowerVR 2NX neural network accelerator IP core. Source: Imagination.

"Dedicated hardware for neural network acceleration will become a standard IP block on future SoCs just as CPUs and GPUs have done," said Chris Longstaff, senior director of product and technology marketing, PowerVR, at Imagination, in a statement. "We are excited to bring to market the first full hardware accelerator to completely support a flexible approach to precision, enabling neural networks to be executed in the lowest power and bandwidth, whilst offering absolute performance and performance per square millimeter that outstrips competing solutions. The tools we provide will enable developers to get their networks up and running very quickly for a fast path to revenue."

