The next generation of consumer and industrial products, particularly battery-powered ones, will rely on a combination of embedded vision and AI to provide new, differentiating features, such as facial recognition under less-than-ideal conditions, or virtual- and augmented-reality (VR/AR) applications. At the same time, developers want to do as much processing as possible on the device to save bandwidth and provide a faster, more intuitive experience for users.
The new Tensilica Vision Q6 DSP IP is designed to handle these general-purpose video processing and AI tasks, delivering up to 384 GMAC/s of peak processing performance on a tight power budget. The IP offers 1.5 times the peak vision and AI performance of its Vision P6 predecessor in the same footprint, while being 1.25 times more power efficient. The jump in performance comes from a completely new architecture with a deeper 13-stage processing pipeline, up from the 10-stage pipeline of the P6. An enhanced instruction set lets embedded vision kernels execute in around 20% fewer cycles. The Vision Q6 DSP targets applications that require 200 to 400 GMAC/s; for greater AI performance, it can be paired with the Vision C5 DSP.
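As a quick sanity check on the quoted figures, the short Python sketch below back-derives the implied Vision P6 peak rate from the 384 GMAC/s and 1.5x numbers above, and applies the quoted 20% cycle reduction to an illustrative kernel (all variable names and the example cycle count are illustrative, not vendor data):

```python
# Back-of-the-envelope check of the quoted Vision Q6 figures.
# The constants come from the announcement; names are illustrative.

Q6_PEAK_GMACS = 384       # Vision Q6 peak throughput, GMAC/s
SPEEDUP_OVER_P6 = 1.5     # quoted vision/AI speedup vs. Vision P6

# Implied Vision P6 peak throughput in the same footprint.
p6_peak_gmacs = Q6_PEAK_GMACS / SPEEDUP_OVER_P6   # -> 256.0 GMAC/s

# A kernel that took N cycles on the P6 takes ~20% fewer on the Q6.
p6_cycles = 1_000_000                  # hypothetical kernel cost
q6_cycles = p6_cycles * (1 - 0.20)     # -> 800,000 cycles

print(f"Implied P6 peak: {p6_peak_gmacs:.0f} GMAC/s")
print(f"Kernel cycles: {p6_cycles} -> {q6_cycles:.0f}")
```

Note that 256 GMAC/s times the quoted 1.5x speedup lands exactly on the Q6's 384 GMAC/s peak, so the two headline numbers are mutually consistent.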
The Tensilica Vision Q6 architecture also doubles the available memory bandwidth and adds features such as separate master/slave AXI interfaces for data and instructions, plus multi-channel DMA, to make memory use more efficient and reduce overhead. The Vision Q6 DSP is software-compatible with its P6 predecessor to ease migration, and library support extends to over 1,500 OpenCV-based vision and OpenVX library functions.
The Vision Q6 DSP supports AI applications developed in the Caffe, TensorFlow and TensorFlow Lite frameworks through the Tensilica Xtensa Neural Network Compiler (XNNC), which maps neural networks into optimised, executable code for the Vision Q6 DSP. The new IP also supports the Android Neural Networks (ANN) API for on-device AI acceleration in Android-powered devices.
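To relate the GMAC/s figures to the kind of neural-network workload such a compiler maps onto the DSP, the sketch below counts the multiply-accumulates in a single convolution layer and the frame rate a 384 GMAC/s peak budget could support for it. The layer dimensions are purely illustrative, and real throughput depends on memory bandwidth and utilisation, not peak MACs alone:

```python
# Rough MAC-count estimate for one convolution layer, to make the
# GMAC/s figures concrete. Layer shape is illustrative, not vendor data.

def conv_macs(h, w, c_in, c_out, k):
    """Multiply-accumulates for one k x k convolution producing an
    h x w x c_out output from a c_in-channel input (stride 1)."""
    return h * w * c_in * c_out * k * k

# Example: 224x224 feature map, 64 -> 64 channels, 3x3 kernel.
macs = conv_macs(224, 224, 64, 64, 3)   # ~1.85 GMAC per frame

budget_macs = 384e9                     # Vision Q6 peak, MAC/s
fps = budget_macs / macs                # upper bound at 100% utilisation
print(f"{macs / 1e9:.2f} GMAC/frame -> up to {fps:.0f} frames/s at peak")
```

A single layer of this size consumes roughly 1.85 GMAC per frame, which illustrates why multi-layer networks on high-resolution video quickly reach the 200 to 400 GMAC/s range the Q6 targets.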