Qualcomm-backed startup announces AI processor family

By Peter Clarke

The three NPUs span ultra-low-power and high-performance options, with a ‘goldilocks’ standard processor in between. They are the KDP 300 ultra-low-power version, the KDP 500 standard version, and the KDP 700 high-performance version, supporting AI applications in smart homes, smart surveillance, smartphones, and IoT devices.

Unlike other AI processors on the market, which often consume several watts, the Kneron NPU Series consumes under 0.5W, and the KDP 300, designed for facial recognition in smartphones, consumes less than 5mW. To achieve this figure, the convolutional neural network (CNN) slice size is limited to 150 by 150, with a processing rate of 5 frames per second and a clock frequency of 20MHz.
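A quick back-of-envelope calculation from the quoted KDP 300 figures (illustrative only; the article does not state how the chip budgets its cycles) shows how constrained that operating point is:

```python
# Back-of-envelope check of the KDP 300 operating point quoted above:
# a 150x150 CNN slice at 5 frames per second with a 20 MHz clock.
clock_hz = 20e6
fps = 5
pixels = 150 * 150

cycles_per_frame = clock_hz / fps          # cycles available per frame
cycles_per_pixel = cycles_per_frame / pixels

print(f"{cycles_per_frame:.0f} cycles/frame, ~{cycles_per_pixel:.0f} cycles/pixel")
```

That is roughly 4 million cycles per frame, or on the order of 178 cycles per input pixel, which is why the slice size and frame rate are capped so aggressively at this power level.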

The KDP 300 supports faster and more accurate 3D live facial recognition through image analysis from 3D structured-light and dual-lens cameras.

The KDP 500 is beefed up to perform real-time recognition, analysis, and deep learning inference for multiple faces and for hand and body gestures, making it suitable for smart home and smart surveillance applications. Its computing capacity is up to 152 GOPS at 500MHz while consuming 100mW.

The KDP 700 supports more advanced and complex AI computing, as well as deep learning inference, for high-end smartphones, robots, drones, and smart surveillance devices. It is still in development and is expected to offer peak throughput of up to 4.4 TOPS at a 1GHz clock frequency while keeping power consumption between 300mW and 500mW.
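Taken at face value, the vendor's throughput and power claims imply the following efficiency figures (a simple ratio of the numbers quoted above, not an independent measurement):

```python
# Efficiency implied by the quoted figures (vendor claims, worst-case power).
specs = {
    "KDP 500": {"ops": 152e9, "power_w": 0.100},   # 152 GOPS at 100 mW
    "KDP 700": {"ops": 4.4e12, "power_w": 0.500},  # 4.4 TOPS at up to 500 mW
}

for name, s in specs.items():
    tops_per_w = s["ops"] / 1e12 / s["power_w"]
    print(f"{name}: {tops_per_w:.2f} TOPS/W")
```

That works out to about 1.5 TOPS/W for the KDP 500 and 8.8 TOPS/W for the KDP 700 at the upper power bound.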

What remains unclear is whether these will be offered as packaged components, as licensable IP, or both, and what datatypes are supported.


Kneron said it has adopted a filter decomposition technique that divides large-scale convolutional computing blocks into a number of smaller ones that can be computed in parallel. Together with reconfigurable convolution acceleration, the results from the small blocks are then combined to improve overall computing performance. Model compression allows unoptimized models to be shrunk by a factor of a few dozen, and a multi-level caching technique reduces the use of CPU resources, further improving overall operational efficiency.
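Kneron does not disclose the details of its decomposition, but the general idea of splitting one large convolution into smaller independent blocks whose results are recombined can be sketched in NumPy (an illustrative row-tiling scheme, not Kneron's implementation):

```python
import numpy as np

def conv2d_valid(x, k):
    """Naive 'valid' 2-D correlation, used as the reference computation."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def conv2d_tiled(x, k, tile_rows=8):
    """Split the input into row bands with a (kh-1)-row halo, convolve each
    band independently (these could run in parallel), then concatenate."""
    kh = k.shape[0]
    oh = x.shape[0] - kh + 1
    parts = []
    for r0 in range(0, oh, tile_rows):
        r1 = min(r0 + tile_rows, oh)
        band = x[r0:r1 + kh - 1, :]   # halo: kh-1 extra rows of overlap
        parts.append(conv2d_valid(band, k))
    return np.vstack(parts)

# The tiled result matches the monolithic convolution exactly.
x = np.arange(100.0).reshape(10, 10)
k = np.ones((3, 3))
assert np.allclose(conv2d_tiled(x, k, tile_rows=3), conv2d_valid(x, k))
```

Because each band only needs its own slice of the input (plus a small halo), the blocks can be dispatched to parallel compute units and their outputs simply stacked, which is the property such decomposition schemes exploit.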

The Kneron NPU IP Series, comprising hardware IP, a compiler, and model compression tools, allows ResNet, YOLO, and other deep learning networks to run on edge devices. It supports various types of CNN, such as ResNet-18, ResNet-34, VGG16, GoogLeNet, and LeNet, as well as mainstream deep learning frameworks, including Caffe, Keras, and TensorFlow.

Albert Liu, Kneron's founder and CEO, said: “Since the release of its first NPU IP in 2016, Kneron has been making continuous efforts to optimize its NPU design and specifications for various industrial applications. We are pleased to introduce the new NPU IP Series and to announce that the KDP 500 will be adopted by our customer and enter the mask tape-out process in the upcoming second quarter.”

Kneron was founded in 2015 and completed a Series A financing round worth more than $10 million in November 2017. Alibaba Entrepreneurs Fund and CDIB Capital Group are the lead investors, and Himax Technologies, Qualcomm, Thundersoft, Sequoia Capital and Cy Zone are co-investors.

Related links and articles:

News articles:

ST preps second neural network IC

Intel launches self-learning processor

ARM launches two machine learning processors

Eta raises funds for spiking neural networks

Ceva goes non-DSP with neural processor

BrainChip founder to focus on ‘Akida’ chip
