Tensilica IP builds DSP specific to neural net computational model

May 02, 2017 // By Graham Prophet
Introducing its Tensilica (Cadence) Vision C5 DSP, the IP vendor emphasises that this is not an accelerator for neural net functions, but a digital signal processor structure that runs all the layers of a neural network. In a leading-edge silicon process, it will pack 1 TeraMAC (TMAC)/sec of computational capacity into less than 1 mm² of silicon area.

Cadence Design Systems positions this as the industry’s first standalone, self-contained neural network DSP IP core optimized for vision, radar/lidar and fused-sensor applications with high-availability neural network computational needs. Targeted at the automotive, surveillance, drone and mobile/wearable markets, the Vision C5 DSP offers 1 TMAC/sec of computational capacity to run all neural network computational tasks.

The contrast drawn with approaches that use accelerators dedicated to the ‘primitives’ of neural net computing is that they can involve moving large amounts of data between processing elements and memory, which is power-hungry. By building an architecture that places all the layered aspects of the neural net model in one place, great efficiencies can be realised, Cadence asserts. The C5 is, in effect, a general-purpose computational machine, but one shaped around the core functions of neural net computing.

As neural networks get deeper and more complex, their computational requirements are increasing rapidly. Meanwhile, neural network architectures change regularly, with new networks appearing constantly and new applications and markets continuing to emerge. These trends are driving the need for a high-performance, general-purpose neural network processing solution for embedded systems that not only requires little power but is also highly programmable, for future-proof flexibility and lower risk.

Camera-based vision systems in automobiles, drones and security systems require (Cadence says) two fundamental types of vision-optimized computation. First, the input from the camera is enhanced using traditional computational photography/imaging algorithms. Second, neural-network-based recognition algorithms perform object detection and recognition. Existing neural network accelerator solutions attach a hardware accelerator to an imaging DSP, splitting the neural network code so that some layers run on the DSP while the convolutional layers are offloaded to the accelerator. This combination is inefficient and consumes unnecessary power.
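
As a rough illustration of that data-movement cost, here is a minimal Python sketch (the layer names and the counting model are hypothetical, not Cadence code) that tallies how often a feature map crosses the DSP/accelerator boundary when convolution layers are offloaded, as in the split approach just described.

```python
# Illustrative model only: count boundary crossings for the split approach,
# where each offloaded convolution copies the feature map out to the
# accelerator and copies the result back to the DSP.
def count_boundary_crossings(layer_kinds):
    crossings = 0
    for kind in layer_kinds:
        if kind == "conv":
            crossings += 2  # copy to accelerator, copy result back
        # pooling, normalization and fully connected layers stay on the
        # imaging DSP, so they add no crossings in this model
    return crossings

# A small hypothetical network with three convolution layers
network = ["conv", "pool", "conv", "norm", "conv", "fc"]
print(count_boundary_crossings(network))  # 6 crossings (3 round trips)
```

Keeping every layer on a single neural-network DSP removes those crossings entirely, which is the efficiency Cadence is claiming for the C5.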

Architected as a dedicated neural-network-optimized DSP, the Vision C5 DSP accelerates all neural network computational layers (convolution, fully connected, pooling and normalization), not just the convolution functions. This frees up the main vision/imaging DSP to concentrate on the traditional computational photography and imaging tasks.
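
For readers less familiar with those layer types, the short NumPy sketch below (an illustrative assumption, not Cadence code or the C5's programming model) shows toy versions of convolution, pooling, normalization and fully connected stages in a single forward pass; the article's point is that on the Vision C5 all four kinds of computation stay on the one core.

```python
# Toy NumPy versions of the four layer types named above: convolution,
# pooling, normalization and fully connected. Purely illustrative.
import numpy as np

def conv2d(x, w):
    """Valid 2-D convolution (cross-correlation, as is conventional in
    neural network layers) of a single-channel image x with kernel w."""
    kh, kw = w.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling."""
    h, w = (x.shape[0] // size) * size, (x.shape[1] // size) * size
    blocks = x[:h, :w].reshape(h // size, size, w // size, size)
    return blocks.max(axis=(1, 3))

def normalize(x, eps=1e-5):
    """Simple mean/variance normalization of a feature map."""
    return (x - x.mean()) / np.sqrt(x.var() + eps)

def fully_connected(x, w, b):
    """Dense layer applied to the flattened feature map."""
    return w @ x.ravel() + b

# A toy end-to-end pass: every stage here is a layer type the article says
# the Vision C5 DSP executes itself rather than handing off to an accelerator.
rng = np.random.default_rng(0)
image = rng.random((8, 8))
features = normalize(max_pool(conv2d(image, rng.random((3, 3)))))
scores = fully_connected(features, rng.random((10, features.size)), np.zeros(10))
print(scores.shape)  # (10,)
```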

