Cutting power, increasing accuracy in memristor analog in-memory AI computing

Technology News |
By Nick Flaherty

Researchers in the US have developed a technique that improves the accuracy of memristor-based analog in-memory computing for AI while cutting its power consumption, using devices from TetraMem.

Analog in-memory computing with memristors is an effective method for modelling complex physical systems that are typically challenging for conventional computing architectures. However, it suffers from read noise and write variability, which restrict scalability, accuracy and precision in high-performance computation.

Reducing the power requirements of AI in this way is a popular area of research.

The researchers at the University of Massachusetts Amherst have demonstrated a circuit architecture and programming protocol that converts the analog computing result to digital only at the final step, enabling low-precision analog devices to perform high-precision computing.

The scheme represents one number as a weighted sum of multiple devices, with each subsequently programmed device compensating for the programming errors of the devices before it. Using a memristor system-on-chip, the team demonstrated high-precision implementations of several scientific computing tasks while maintaining a substantial power-efficiency advantage over conventional digital approaches.
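A minimal numerical sketch of this residual-compensation idea is below. The write-noise level, scaling base and device count are illustrative assumptions, not the paper's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def program_device(target, write_noise=0.02):
    # One analog write only lands near its target; this Gaussian noise
    # level is an illustrative assumption, not a measured device figure.
    return target + rng.normal(0.0, write_noise)

def store_high_precision(value, n_devices=4, base=8.0):
    # Represent `value` with several low-precision devices: each new
    # device is programmed to the (scaled-up) residual error left by
    # the devices programmed before it.
    stored, residual = [], value
    for _ in range(n_devices):
        stored.append(program_device(residual))
        residual = (residual - stored[-1]) * base
    # Read back as a weighted sum with weights 1, 1/base, 1/base^2, ...
    return sum(s / base**k for k, s in enumerate(stored))

x = 0.7312894
print(abs(store_high_precision(x) - x))  # residual error shrinks by ~`base` per extra device
```

Each extra device divides the remaining error by the scaling base, which is how a stack of noisy analog cells can approach digital-grade precision.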

The chip can execute a large vector-matrix multiplication (VMM) in a single computing cycle. After each computation step by an array, subarrays dynamically compensate for the residual errors of the previously programmed array. The method was used experimentally to solve partial differential equations with errors below 10⁻¹⁵ and with higher energy efficiency than digital computing.

Qiangfei Xia, UMass Amherst professor of electrical and computer engineering, worked with researchers at the University of Southern California and TetraMem on the design.

The memristor controls the flow of electrical current in a circuit while also retaining its state even when the power is turned off, reducing power consumption. Each device can be programmed to multiple resistance levels, increasing the information density of a single cell to cut power further.

A memristor crossbar array can process this multilevel analog data in a massively parallel fashion, substantially accelerating matrix operations, the most frequently used and most power-hungry computations in neural networks. The computing is performed at the site of the data, rather than moving data between memory and processor.
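The crossbar performs the multiplication physically: Ohm's law in each cell and Kirchhoff's current law on each column deliver all output currents at once. A sketch of the equivalent arithmetic, with illustrative conductance and voltage values:

```python
import numpy as np

# Each memristor cell stores one weight as a conductance (in siemens);
# these values are illustrative, not from the actual chip.
G = np.array([[1.0e-6, 2.0e-6],
              [0.5e-6, 1.5e-6]])

v = np.array([0.2, 0.1])  # input voltages applied to the crossbar rows

# Each column current is the sum of (conductance x voltage) down that
# column, so all outputs arrive in one analog step: i = G^T v.
i = G.T @ v
print(i)  # the column currents are the vector-matrix product
```

In the digital equivalent this is a loop of multiply-accumulates; in the crossbar it is a single parallel physical operation, which is the source of the energy advantage.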

Previously, these researchers demonstrated that their memristor can complete low-precision computing tasks, like machine learning. Other applications have included analog signal processing, radiofrequency sensing, and hardware security.

“We demonstrate a new circuit architecture and programming protocol that can efficiently represent high-precision numbers using a weighted sum of multiple, relatively low-precision analog devices, such as memristors, with a greatly reduced overhead in circuitry, energy and latency compared with existing quantization approaches,” says Xia.

“The breakthrough for this particular paper is that we push the boundary further,” he adds. “This technology is not only good for low-precision, neural network computing, but it can also be good for high-precision, scientific computing.”

For the proof-of-principle demonstration, the memristor chip solved static and time-evolving partial differential equations, including the Navier-Stokes equations and magnetohydrodynamics problems.

“We pushed ourselves out of our own comfort zone,” he says, expanding beyond the low-precision requirements of edge computing neural networks to high-precision scientific computing.

It took over a decade for the UMass Amherst team and collaborators to design a proper memristor device and build sizeable circuits and computer chips for analog in-memory computing.

“Our research in the past decade has made the analog memristor a viable technology. It is time to move such a great technology into the semiconductor industry to benefit the broad AI hardware community,” said Xia.




