Low-power AI chip features quantized DNN engine


New Products | By eeNews Europe

The prototype is part of a research project on “Updatable and Low Power AI-Edge LSI Technology Development” commissioned by the New Energy and Industrial Technology Development Organization (NEDO) of Japan.
Today’s edge computing devices are based on conventional, general-purpose GPUs. These processors generally cannot keep pace with the growing demand for AI workloads such as image recognition and analysis: meeting that demand means larger, costlier devices as power consumption and heat generation rise. Devices with such limited performance are poorly suited to state-of-the-art AI processing.
In their place, Socionext has developed a proprietary architecture based on “quantized DNN technology”, which reduces the number of bits needed for the parameters and activations used in deep learning. The result is improved AI processing performance along with lower power consumption. The architecture supports 1-bit (binary) and 2-bit (ternary) quantization in addition to the conventional 8-bit, as well as the company’s original parameter compression technology, enabling a large amount of computation with fewer resources and significantly less data.
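The bit-reduction idea can be illustrated with a minimal plain-Python sketch. This is not Socionext's actual scheme; the `ternarize` function, its threshold, and the shared scale factor are illustrative assumptions showing how a 2-bit (ternary) representation can stand in for full-precision weights:

```python
def ternarize(weights, threshold=0.05):
    """Map each float weight to {-1, 0, +1} so it fits in 2 bits.

    A shared scale factor (mean magnitude of the non-zero weights)
    lets scale * code approximate each original weight at inference.
    """
    codes = [0 if abs(w) < threshold else (1 if w > 0 else -1)
             for w in weights]
    nonzero = [abs(w) for w, c in zip(weights, codes) if c != 0]
    scale = sum(nonzero) / len(nonzero) if nonzero else 0.0
    return codes, scale

codes, scale = ternarize([0.8, -0.02, -0.6, 0.4])
# codes == [1, 0, -1, 1]; scale == 0.6
```

Because every weight collapses to one of three values, a multiply-accumulate reduces to additions and subtractions plus one final scaling, which is where the compute and memory savings come from.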
In addition, Socionext says it has developed a novel on-chip memory technology that provides highly efficient data delivery, reducing the need for extensive large capacity on-chip or external memory typically required for deep learning.
These new technologies were integrated into the prototype AI chip, which is reported to run “YOLO v3” object detection at 30 fps while consuming less than 5 W, which the company claims is ten times more efficient than conventional, general-purpose GPUs. The chip also includes a high-performance, low-power quad-core Arm Cortex-A53 CPU; unlike other “accelerator” chips, it can perform the entire AI processing pipeline without external processors.

To allow developers to perform their own low-bit “quantization-aware training” or “post-training quantization”, Socionext has also built a deep learning software development environment with TensorFlow as its base framework. Used in combination with the new chip, it lets users choose and apply the optimal quantization technique to various neural networks and execute highly accurate processing.
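As a rough numerical illustration of what post-training quantization does (a plain-Python sketch of the common symmetric 8-bit scheme; the function names are mine, not from Socionext's toolchain): trained float weights are mapped to 8-bit integers via a single scale factor, and dequantization recovers a close approximation without any retraining.

```python
def quantize_int8(weights):
    """Symmetric post-training quantization: floats -> int8 codes + scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 codes."""
    return [v * scale for v in q]

q, s = quantize_int8([0.5, -1.27, 0.03])
approx = dequantize(q, s)  # each value within half a quantization step
```

Quantization-aware training differs in that this rounding is simulated during training, so the network learns weights that remain accurate after the precision loss; post-training quantization, as above, is applied to an already-trained model.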

The new chip is expected to bring the most advanced computer vision functionality to small form-factor, low-power edge devices. Target applications include advanced driver assistance systems (ADAS), security cameras, and factory automation, among others.
Socionext is currently fine-tuning circuitry and optimizing performance through prototype evaluation. The company will continue research and development with its partner companies toward completing the NEDO-commissioned project and delivering the AI Edge LSI as the final product.

