Processing-in-Memory supports AI acceleration at 8.8 TOPS/W

June 13, 2019 //By Julien Happich
Renesas Electronics announced it has developed a highly efficient AI accelerator that performs CNN (convolutional neural network) processing at low power, enabling increased intelligence in future endpoint devices with embedded AI (e-AI).

A test chip featuring this accelerator has achieved a power efficiency of 8.8 TOPS/W, claims Renesas, which presented the technology in a paper titled “A Ternary Based Bit Scalable, 8.80 TOPS/W CNN Accelerator with Multi-Core Processing-in-Memory Architecture with 896K Synapses/mm2” at the Symposia on VLSI Technology and Circuits in Kyoto, Japan. The Renesas accelerator is based on the processing-in-memory (PIM) architecture, an increasingly popular approach for AI technology, in which multiply-and-accumulate operations are performed in the memory circuit as data is read out from that memory.
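To make that dataflow concrete, here is a minimal software sketch of the PIM idea: weights sit in a memory array, and applying a row of inputs yields all column-wise multiply-and-accumulate results in a single "read". The array dimensions and variable names below are illustrative assumptions, not details of the Renesas design.

```python
import numpy as np

# Conceptual sketch only: in a PIM array, each column stores weights, and
# the multiply-and-accumulate happens on the bitlines while a row of inputs
# is applied. This software model just mimics that dataflow; sizes and
# names are hypothetical, not taken from the Renesas chip.

ROWS, COLS = 256, 64                # assumed array dimensions

rng = np.random.default_rng(0)
weights = rng.integers(-1, 2, size=(ROWS, COLS))  # ternary weights stored in the array
inputs = rng.integers(0, 2, size=ROWS)            # activations driven on the wordlines

# In hardware, every column accumulates simultaneously during the read:
# each bitline result is proportional to sum_i(input_i * weight_ij).
mac_per_column = inputs @ weights   # one "read" yields COLS dot products at once

print(mac_per_column[:8])
```

The point of the sketch is the parallelism: a single memory access produces one multiply-and-accumulate result per column, rather than shuttling each weight to a separate arithmetic unit.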

To create the new AI accelerator, Renesas developed a ternary-valued (-1, 0, +1) SRAM-based PIM technology that can perform large-scale CNN computations. The SRAM circuit was then combined with comparators that can read out memory data at low power.
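As an illustration of why ternary weights suit such a low-power readout, the sketch below quantizes real-valued weights to {-1, 0, +1} and then evaluates a dot product using only additions and subtractions. The threshold rule and function names are hypothetical, not Renesas' actual quantization scheme.

```python
import numpy as np

def ternarize(w, threshold=0.05):
    """Map real-valued weights to {-1, 0, +1} by a simple threshold rule.
    The threshold value is illustrative, not Renesas' scheme."""
    t = np.zeros_like(w, dtype=np.int8)
    t[w > threshold] = 1
    t[w < -threshold] = -1
    return t

def ternary_dot(x, t):
    """With ternary weights, 'multiplication' reduces to conditional
    add/subtract: +1 adds the activation, -1 subtracts it, 0 contributes
    nothing. This is what keeps the in-memory readout cheap."""
    return x[t == 1].sum() - x[t == -1].sum()

rng = np.random.default_rng(1)
w = rng.normal(size=16)   # full-precision weights
x = rng.normal(size=16)   # activations

t = ternarize(w)
# The add/subtract form matches the ordinary dot product with ternary weights
# (up to floating-point summation order).
print(ternary_dot(x, t), float(x @ t))
```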

The company also implemented a novel technology that prevents calculation errors caused by process variations in manufacturing.

Combining these three approaches reduces both the memory access time in deep learning processing and the power required for the multiply-and-accumulate operations, says Renesas, enabling the new accelerator to achieve the industry's highest class of power efficiency while maintaining an accuracy of more than 99 percent in a handwritten character recognition test (MNIST).

