Hyperdimensional computing helps AI fit on PCM chip
The work shows a six-fold improvement in energy efficiency compared with a system-level design implemented in 65nm CMOS technology, and the researchers claim the approach shows potential for further gains towards brain-like, energy-efficient computing.
The approach reflects a trade-off between accuracy and energy efficiency, but one that works well for applications such as pattern recognition and classification. One advantage over artificial neural networks (ANNs) is that training can be done in a much more brain-like manner, with a single exposure to the available data. ANNs tend to require iterative training sessions that consume large amounts of energy. The work is reported in Nature Electronics (see https://www.nature.com/articles/s41928-020-0410-3).
Hyperdimensional computing is an emerging form of computing that parallels some key aspects of biological memory, perception and cognition. Many of these biological functions can be modelled by the mathematical properties of hyperdimensional vectors, holographic representation and pseudo-randomness. In these cases, information is encoded in vectors that may have more than 1,000 dimensions. In IBM's research, vectors with 10,000 dimensions were used.
By applying the mathematics of hypervectors, such computation can be applied to machine learning tasks such as learning and classification.
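To illustrate the mathematics involved, the sketch below (not IBM's implementation) uses NumPy to show the two core hypervector operations on random bipolar (±1) vectors of the 10,000-dimensional kind mentioned above: binding (element-wise multiplication), which yields a vector dissimilar to both inputs, and bundling (element-wise majority), which yields a vector similar to its inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # dimensionality reported in IBM's experiments

# Random bipolar hypervectors are quasi-orthogonal in high dimensions:
# their cosine similarity concentrates around zero.
a = rng.choice([-1, 1], size=D)
b = rng.choice([-1, 1], size=D)

# Binding: element-wise multiply. The result is dissimilar to both a and b.
bound = a * b

# Bundling: element-wise majority vote, here over a, b and a random
# tie-breaker vector so every component is decided.
bundled = np.sign(a + b + rng.choice([-1, 1], size=D))

def cosine(x, y):
    """Cosine similarity between two hypervectors."""
    return float(x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))
```

Running `cosine(a, b)` and `cosine(bound, a)` gives values near zero, while `cosine(bundled, a)` is clearly positive, which is the property that lets bundled vectors act as holographic memories of their components.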
Although the encoding is rich and the vector operations mathematically intensive, these operations can be performed using in-memory computation. The approach can be applied to sophisticated tasks such as object detection, language and object recognition, voice and video classification, time series analysis, text categorization, and analytical reasoning.
One key benefit of hyperdimensional computing is that training is more efficient than with conventional neural network approaches, as object categories are learned in a single pass over the available data. The approach is memory-centric and robust against noise, variations or failed components within the computing platform.
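The single-pass training described above can be sketched as follows. This is a hedged toy example, not the chip's pipeline: the class names and the noisy-sample generator are hypothetical stand-ins for a real encoder. Training simply bundles each class's sample hypervectors into one prototype; inference picks the most similar prototype.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 10_000

# Hypothetical toy data: each class has a characteristic bipolar hypervector,
# and a real encoder is replaced by noisy copies of it (20% of bits flipped).
def make_sample(proto, flip=0.2):
    noise = rng.random(D) < flip
    return np.where(noise, -proto, proto)

protos = {c: rng.choice([-1, 1], size=D) for c in ("lang_en", "lang_de", "lang_fr")}

# Training: a single pass — bundle (sum, then binarize) five samples per class.
# An odd sample count avoids zero components after np.sign.
class_hvs = {c: np.sign(sum(make_sample(p) for _ in range(5)))
             for c, p in protos.items()}

# Inference: nearest class prototype by dot-product similarity.
def classify(x):
    return max(class_hvs, key=lambda c: int(x @ class_hvs[c]))
```

Because bundling is just accumulation, each training sample is seen exactly once, in contrast with the iterative gradient updates of an ANN; the dot products at inference are the kind of operation the PCM crossbar performs in memory.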
The approach was tested by IBM on a prototype PCM chip fabricated in a 90nm process that includes 760,000 phase-change memory elements in a crossbar array. It was applied to three tasks: language classification, news classification and gesture recognition. The results were compared with a system-level hyperdimensional computing processor design targeting 65nm CMOS, and showed a 6× advantage in energy efficiency and a 3.74× reduction in die area for in-memory computation.
The authors point out that further improvements are likely through refinements to the peripheral circuitry.
Related links and articles:
https://www.nature.com/articles/s41928-020-0410-3
IBM uses phase-change memory for machine learning
SiOx ReRAM selected for neuromorphic research
Intel takes neural networks binary
Micron, Lam help raise Mythic’s Series B to $70 million
Processor-in-memory DRAM benchmarked on Xeon server