Neural network accelerator scales up multi-core AI

December 05, 2018 // By Julien Happich
Imagination Technologies' latest neural network accelerator (NNA) architecture, the PowerVR Series3NX, was designed to enable SoC manufacturers to optimise compute power and performance across a range of embedded markets.

A single Series3NX core scales from 0.6 to 10 tera operations per second (TOPS), while multicore implementations can scale beyond 160 TOPS, claims the company.

Thanks to architectural enhancements, including lossless weight compression, the Series3NX architecture delivers a 40% performance boost over the previous generation in the same silicon area, giving SoC manufacturers nearly a 60% improvement in performance efficiency and a 35% reduction in bandwidth.
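Imagination does not disclose the details of its compression scheme. As a purely illustrative sketch of how a generic lossless technique can cut weight storage and memory bandwidth, the snippet below run-length encodes the zero weights common in pruned networks and decodes them back to the exact original values (the function names and encoding are hypothetical, not the Series3NX format):

```python
# Illustrative only: a generic lossless scheme, not Imagination's actual
# compression format. Zero runs in sparse weight tensors are stored as
# (run_length, value) pairs, and decoding recovers the weights exactly.

def rle_encode(weights):
    """Encode a flat weight list as (zero_run_length, value) pairs."""
    encoded, zeros = [], 0
    for w in weights:
        if w == 0:
            zeros += 1
        else:
            encoded.append((zeros, w))
            zeros = 0
    if zeros:
        encoded.append((zeros, None))  # trailing zeros carry no value
    return encoded

def rle_decode(encoded):
    """Recover the exact original weight list (lossless round trip)."""
    weights = []
    for zeros, w in encoded:
        weights.extend([0] * zeros)
        if w is not None:
            weights.append(w)
    return weights

weights = [0, 0, 0, 1.5, 0, -2.0, 0, 0, 0, 0, 3.25]
packed = rle_encode(weights)
assert rle_decode(packed) == weights  # decompression is exact
```

Because decoding is exact, such schemes trade a small amount of decode logic for less DRAM traffic, which is where the bandwidth savings claimed above would come from.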

As part of the Series3NX architecture, Imagination is also announcing the PowerVR Series3NX-F (Flexible) IP configuration, which balances functionality with flexibility. With Imagination's dedicated DNN (Deep Neural Network) API, developers can easily write AI applications targeting the Series3NX architecture as well as existing PowerVR GPUs. The API works across multiple SoC configurations for easy prototyping on existing devices. Imagination launched the previous generation of its NNA, the PowerVR Series2NX, in 2017. To date it has been licensed by multiple customers, predominantly in the mobile and automotive markets.

The PowerVR Series3NX is available for licensing now; the PowerVR Series3NX-F will be available in Q1 2019.

Imagination Technologies – www.imgtec.com

