Low power breakthrough for edge AI chips

Technology News
By Nick Flaherty

Imec in Belgium has developed a test chip using a new technique that dramatically reduces the power of machine learning edge AI systems.

The Analog in Memory Computing (AiMC) architecture uses modified memory cells to process data in trained neural networks for AI at the edge of the network with a power efficiency of 2900TOPS/W.

“We have built a special compute cell where you are saving energy by reducing the digital transfers,” said Diederik Verkest, program director for machine learning at imec. “Depending on the pulse width on the activation line [of the cell] you get summation of the weights on the [analog to digital converter] ADC before continuing with digital computations,” he said.

“In this chip we work with 3-level weights. A weight can be -1, 0, or 1 and we use two SRAM cells to store this weight level. The compute cell is an analog circuit with a few additional transistors on top of these two SRAM cells,” he said. “This produces an analog signal proportional to the multiplication of the 3-level weight stored and the activation signal (the output of the DAC). So strictly speaking, the 3-level weight is stored in a digital fashion but all the computation is done in the analog domain.”
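Verkest's description of the compute cell — ternary weights multiplying pulse-width-encoded activations, with the products summed in the analog domain and digitised only once at the ADC — can be sketched numerically. The following is an illustrative simulation, not imec's actual circuit; the 4-bit DAC and 6-bit ADC resolutions are assumptions for the sketch, not published AnIA parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs = 1024                                  # AnIA has 1024 input signals
weights = rng.integers(-1, 2, size=n_inputs)     # 3-level weights: -1, 0 or 1

# DAC: each activation becomes a pulse width, here normalised to 0..1
# (4-bit DAC resolution is an assumption, not a published AnIA figure)
activations = rng.integers(0, 16, size=n_inputs) / 15.0

# Analog summation: every compute cell contributes weight * pulse width
# onto a shared line, so the accumulated analog value is the dot product
analog_sum = float(np.dot(weights, activations))

# ADC: quantise the accumulated analog value back to the digital domain
# (6-bit ADC and the full-scale range are likewise assumptions)
full_scale = n_inputs
adc_levels = 2 ** 6
digital_out = round(analog_sum / full_scale * (adc_levels // 2))
```

The energy saving the article describes comes from the last step: instead of one digital multiply and data transfer per weight, there is a single analog-to-digital conversion per accumulated output.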

“The successful tape-out of AnIA marks an important step forward toward validation of Analog in Memory Computing (AiMC),” he added. “The reference implementation not only shows that analog in-memory calculations are possible in practice, but also that they achieve an energy efficiency ten to a hundred times better than digital accelerators. From our perspective this was a milestone in the machine learning programme to show that an analogue computation can have the same accuracy as digital computation.”

The Analog Inference Accelerator (AnIA) test chip has been built on the 22nm FD-SOI low power process from Global Foundries at its fab in Dresden, Germany. The chip is 4mm2 with 1024 input signals and 512 outputs with similar performance to today’s graphics processing units (GPUs). It showed the same accuracy as a digital implementation to within 1 percent but had a power efficiency of 2900TOPS/W. The combination of low power and low cost opens up opportunities for edge AI image recognition and sensing in embedded hardware.

“Analogue compute is a phenomenal frontier as it allows you to reduce the data movement and this is going to become mainstream,” said Hiren Majmudar, vice president of product management for computing and wired infrastructure at GF.


“This test chip is a critical step forward in demonstrating to the industry how 22FDX can significantly reduce the power consumption of energy-intensive AI and machine learning applications,” said Majmudar.

“We get the same performance as GPUs but with much greater energy efficiency and a 4mm2 chip,” said Verkest. “If you increase the size of the arrays you will also increase the performance levels and once you start approaching the area of GPUs the performance will be higher but the power will be comparable.”

“GlobalFoundries collaborated closely with imec to implement the new AnIA chip using our low-power, high-performance 22FDX platform,” said Majmudar.

GF will offer AiMC as a feature that can be implemented on its 22FDX platform. The 22nm FD-SOI technology with the new AiMC feature is in development at GF’s state-of-the-art 300mm production line at Fab 1 in Dresden, Germany.

“We are seeing partners of GF with validated silicon and we expect analogue compute silicon will be hitting production late this year and early next year, getting into the mass market no later than 2022 and possibly earlier than that,” he said.

This version on GF’s process used modified SRAM cells, but the same technique can be used with other memory technologies. “You can use SRAM, MRAM, flash, DRAM; it’s part of the programme to understand which option is best,” said Verkest at imec.

