Processing-in-memory advance uses neural networks

Technology News
By Rich Pell



Processing-in-memory (PIM) is an emerging computing paradigm that merges the memory and the processing unit and performs its computations using the physical properties of the machine – no 1s or 0s needed to do the processing digitally. The new circuit, say the researchers, has the potential to increase PIM computing’s performance by orders of magnitude beyond its current theoretical capabilities.

Traditionally designed computers are built on the von Neumann architecture, which separates the memory (where data is stored) from the processor (where the actual computing is performed).

“Computing challenges today are data-intensive,” says Xuan “Silvia” Zhang, associate professor in the Preston M. Green Department of Electrical & Systems Engineering at the McKelvey School of Engineering. “We need to crunch tons of data, which creates a performance bottleneck at the interface of the processor and the memory.”

PIM computers aim to bypass this problem by merging the memory and the processing into one unit. Computing, say the researchers, especially computing for today’s machine-learning algorithms, is essentially an extremely complex series of additions and multiplications.

In a traditional digital central processing unit (CPU), this is done using transistors, which are essentially voltage-controlled gates that either allow current to flow or block it. These two states represent 1 and 0, respectively. Using this digital – or binary – code, a CPU can do any and all of the arithmetic needed to make a computer work.
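To make the binary picture concrete, here is a minimal sketch (an illustration, not from the article) of how the same AND/OR/XOR gate logic a CPU realizes with transistors adds two numbers one bit at a time:

```python
def full_adder(a: int, b: int, carry_in: int):
    """Add three bits; return (sum_bit, carry_out)."""
    s = a ^ b ^ carry_in                         # XOR gives the sum bit
    carry_out = (a & b) | (carry_in & (a ^ b))   # carry propagates onward
    return s, carry_out

def add_binary(x: int, y: int, width: int = 8) -> int:
    """Ripple-carry addition of two integers, one bit position at a time."""
    result, carry = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

print(add_binary(13, 29))  # 42
```

Every addition and multiplication a CPU performs ultimately reduces to chains of gate operations like these.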

As an alternative, the researchers are working on resistive random-access memory PIM, or RRAM-PIM. Whereas a conventional memory cell stores a bit as charge on a capacitor, RRAM-PIM computers rely on resistors, which serve as both the memory and the processor.

As a result, says Zhang, “In resistive memory, you do not have to translate to digital, or binary. You can remain in the analog domain. If you need to add, you connect two currents. If you need to multiply, you can tweak the value of the resistor.”
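The analog arithmetic Zhang describes can be sketched in a few lines (an assumed idealized model, not the researchers' actual circuit): Ohm's law does the multiplication, and Kirchhoff's current law does the addition.

```python
def multiply(voltage: float, resistance_ohms: float) -> float:
    """Ohm's law: programming the resistor sets the multiplier; I = V / R."""
    return voltage / resistance_ohms

def add(*currents: float) -> float:
    """Kirchhoff's current law: currents wired onto one node simply sum."""
    return sum(currents)

# 2 V through 1 kOhm plus 3 V through 1 kOhm -> 5 mA on the shared wire
i_total = add(multiply(2.0, 1e3), multiply(3.0, 1e3))
print(round(i_total, 6))  # 0.005
```

No gates, no binary encoding: the physics of the circuit performs the arithmetic directly.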

At some point, however, the information must be translated into a digital format to interface with other technologies, and these frequent, energy-intensive analog-to-digital (A/D) conversions severely limit the performance of RRAM-PIM circuits. To address this bottleneck, the researchers introduced neural approximators.

“A neural approximator is built upon a neural network that can approximate arbitrary functions,” says Zhang.

Given any function, a neural approximator can perform the same computation more efficiently. In this case, the researchers designed neural approximator circuits that could help clear the bottleneck.

In the RRAM-PIM architecture, once the resistors in a crossbar array have done their calculations, the answers are translated into a digital format. What that means in practice is adding up the results from each column of resistors on a circuit. Each column produces a partial result.

Each of those partial results, in turn, must then be converted into digital information in what is called an analog-to-digital conversion, or ADC. The conversion is energy intensive. The neural approximator makes the process more efficient.
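A toy simulation (with made-up sizes and values, purely for illustration) shows why this readout is expensive: each column of the crossbar yields one analog partial sum, and a baseline design spends one A/D conversion on every column.

```python
import numpy as np

# Hypothetical crossbar: column conductances G encode a weight matrix,
# and applying input voltages v produces one analog column current per
# column (a Kirchhoff sum down that column).
rng = np.random.default_rng(0)
n_rows, n_cols = 64, 64
G = rng.uniform(0.0, 1.0, size=(n_rows, n_cols))  # programmed conductances
v = rng.uniform(0.0, 1.0, size=n_rows)            # input voltages

column_currents = v @ G      # one analog partial sum per column
adc_ops_baseline = n_cols    # conventional readout: one ADC per column
print(adc_ops_baseline)      # 64
```

With 64 columns, the circuit pays for 64 conversions per matrix-vector product before any digital post-processing even begins.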

Instead of adding each column one by one, the neural approximator circuit can perform multiple calculations – down columns, across columns or in whichever way is most efficient. This leads to fewer ADCs and increased computing efficiency.

The most important part of this work, say the researchers, was determining to what extent they could reduce the number of digital conversions happening along the outer edge of the circuit. They found that their neural approximator circuits pushed that reduction as far as it can go.

“No matter how many analog partial sums generated by the RRAM crossbar array columns – 18 or 64 or 128 – we just need one analog to digital conversion,” says Weidong Cao, a postdoctoral research associate in Zhang’s lab. “We used hardware implementation to achieve the theoretical low bound.”
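Conceptually (this is an illustrative model, not the circuit from the paper), the approximator acts like a small analog network that folds all the column currents into a single output, so only one value ever needs digitizing, whatever the column count:

```python
import numpy as np

def neural_approximator(partial_sums, w1, w2):
    """Two analog layers combining the column currents into one output."""
    hidden = np.tanh(partial_sums @ w1)   # analog nonlinearity
    return hidden @ w2                    # single analog output

rng = np.random.default_rng(1)
for n_cols in (18, 64, 128):
    sums = rng.uniform(size=n_cols)           # analog column partial sums
    w1 = rng.normal(size=(n_cols, 8))         # illustrative random weights
    w2 = rng.normal(size=8)
    _ = neural_approximator(sums, w1, w2)     # one scalar to digitize
    print(n_cols, "columns -> 1 A/D conversion")
```

However wide the crossbar grows, the digitization cost stays constant at one conversion, which is the theoretical lower bound Cao refers to.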

Using their neural approximators, say the researchers, could eliminate one of the biggest obstacles to large-scale PIM computing – the conversion bottleneck – showing that this new computing paradigm has the potential to be much more powerful than the current framework suggests: not just one or two times more powerful, but 10 or 100 times more so.

“Our tech enables us to get one step closer to this kind of computer,” says Zhang.

For more, see “Neural-PIM: Efficient Processing-In-Memory with Neural Approximation of Peripherals.”

Related articles:
Digital PIM architecture accelerates unsupervised ML
Samsung combines high-bandwidth memory with AI processing power
ReRAM firm Crossbar headed towards AI, 1X-nm
AI accelerator chip with embedded MRAM is production ready

 
