Hewlett Packard Labs has developed a memristor-based memory capable of learning for neuromorphic AI.
The differentiable content addressable memory (dCAM) mimics the operation of associative learning in the brain, and Hewlett Packard Enterprise says it could lead to a novel computing paradigm capable of adapting itself while performing different tasks.
Crossbar arrays offer a compact implementation of an arbitrary analog matrix. By exploiting their current-voltage physics, previous research has shown that these devices can efficiently perform matrix-vector multiplication directly in hardware, as in the Dot Product Engine (DPE) built with ReRAM memristor devices.
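In a crossbar, each device conductance encodes one matrix weight, input voltages drive the rows, and Kirchhoff's current law sums the products on each column. A minimal numerical sketch of that idea (ideal devices, no noise or wire resistance; all names are illustrative):

```python
import numpy as np

def crossbar_mvm(conductances, voltages):
    """Ideal dot-product engine: column current I_j = sum_i V_i * G_ij.

    conductances : (rows, cols) array of device conductances (siemens)
    voltages     : (rows,) array of voltages applied to the rows
    Returns the column currents, i.e. the matrix-vector product.
    """
    return voltages @ conductances

# Example: a 3x2 weight matrix stored as conductances
G = np.array([[1.0, 0.5],
              [0.2, 0.8],
              [0.6, 0.1]]) * 1e-6   # microsiemens
V = np.array([0.3, 0.7, 0.1])       # input voltages
print(crossbar_mvm(G, V))           # column currents = analog MVM result
```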
Digital content-addressable memories (CAMs) are already used in networking and security and are paired with RAMs to associate an input with another piece of information. This creates an associative memory.
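In a CAM the lookup is by content rather than by address: the memory returns the address of a matching entry, and a companion RAM then returns the data stored at that address. A simple software analogy (illustrative only; a hardware CAM compares all rows in parallel in a single cycle):

```python
def cam_search(cam_rows, key):
    """Return the address (row index) whose stored content matches the key,
    or None if no row matches. Hardware checks every row in parallel."""
    for address, stored in enumerate(cam_rows):
        if stored == key:
            return address
    return None

cam = ["10.0.0.1", "10.0.0.2", "192.168.1.7"]   # stored contents (e.g. IP addresses)
ram = ["port 3",   "port 1",   "port 7"]        # associated data at the same addresses

addr = cam_search(cam, "10.0.0.2")
if addr is not None:
    print(ram[addr])   # -> "port 1": the associative lookup
```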
An analog CAM (aCAM), also developed at the labs using ReRAM memristors, enables searching for an analog value that falls within a range, while still producing a digital output.
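Each aCAM cell effectively stores a lower and an upper bound, and a row matches when every analog input falls inside its cell's range, so the row's digital match line stays high. A minimal sketch of that matching rule, assuming per-cell [low, high] bounds:

```python
import numpy as np

def acam_match(rows_low, rows_high, inputs):
    """Boolean match flag per row: True when every input value lies within
    that row's per-cell [low, high] range."""
    inside = (inputs >= rows_low) & (inputs <= rows_high)
    return inside.all(axis=1)

low  = np.array([[0.1, 0.4], [0.0, 0.0]])   # lower bounds, two rows of two cells
high = np.array([[0.3, 0.9], [0.2, 0.5]])   # upper bounds
x    = np.array([0.15, 0.6])                # analog search input
print(acam_match(low, high, x))             # -> [ True False ]
```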
While the analog CAM has several important uses, it cannot operate as a learning associative memory, nor can it implement neural network learning algorithms.
The research found that a simple modification to the analog CAM circuit produces a differentiable CAM (dCAM), which has a continuous relationship between input and output. This allows the new cell to be trained with backpropagation and other established neural network techniques.
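Making the match continuous means replacing the hard in-range test with a smooth function whose gradient with respect to the stored bounds (set by memristor conductances) is well defined, so backpropagation can adjust them. One way to sketch that is with a product of sigmoids, an illustrative relaxation rather than the exact circuit response:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def soft_match(low, high, x, beta=50.0):
    """Differentiable degree of match in [0, 1] for one row of stored
    [low, high] ranges; approaches the hard in-range test as beta grows."""
    per_cell = sigmoid(beta * (x - low)) * sigmoid(beta * (high - x))
    return per_cell.prod()

low, high = np.array([0.1, 0.4]), np.array([0.3, 0.9])
print(soft_match(low, high, np.array([0.2, 0.2])))   # ~0: second input out of range
print(soft_match(low, high, np.array([0.2, 0.6])))   # ~0.99: input inside both ranges
```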
“We believe this is an important advancement that has a variety of disparate uses,” said Giacomo Pedretti, who led the research team from Hewlett Packard Labs working with Hong Kong University and Forschungszentrum Jülich.
By programming the dCAM and sensing the analog output, an error can be computed and backpropagated, and the analog memristor conductances updated to compensate for it, increasing the memory's performance and density.
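With a differentiable match, programming becomes a training loop: sense the output, compare it with the target match/mismatch label, and nudge the stored bounds down the gradient of the error. A hedged sketch using the same relaxation as above and a finite-difference gradient (real device-level conductance updates would replace the plain subtraction below):

```python
import numpy as np

def soft_match(low, high, x, beta=50.0):
    """Differentiable match degree, same sigmoid relaxation as the earlier sketch."""
    s = lambda z: 1.0 / (1.0 + np.exp(-z))
    return (s(beta * (x - low)) * s(beta * (high - x))).prod()

def numerical_grad(f, params, eps=1e-5):
    """Finite-difference gradient of a scalar function of the parameters."""
    g = np.zeros_like(params)
    for i in range(params.size):
        p, m = params.copy(), params.copy()
        p.flat[i] += eps; m.flat[i] -= eps
        g.flat[i] = (f(p) - f(m)) / (2 * eps)
    return g

# One dCAM row: params = [low_0, low_1, high_0, high_1]
params = np.array([0.10, 0.40, 0.30, 0.90])
x, target = np.array([0.35, 0.60]), 1.0      # we want this input to match

def loss(p):
    return (soft_match(p[:2], p[2:], x) - target) ** 2

for _ in range(300):
    params -= 0.05 * numerical_grad(loss, params)   # stand-in for conductance updates

print(loss(params))   # error shrinks as the stored range learns to cover x
```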
This circuit can also efficiently perform Boolean satisfiability optimization, an NP-complete problem at the basis of important applications such as formal verification and planning, with limited scaling of resources as the problem size grows. In this case the dCAM stores the clauses, and the inputs that satisfy those clauses are learned via stochastic gradient descent.
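The principle can be illustrated in software: relax each Boolean variable to a continuous value in [0, 1], score each stored clause with a smooth "satisfied" measure, run noisy gradient descent on the inputs until every clause score is high, then threshold back to Booleans. A hedged sketch of that relaxation (not the dCAM circuit itself):

```python
import numpy as np

# CNF formula (x1 OR NOT x2) AND (NOT x1 OR x2) AND (x2 OR x3),
# encoded as (variable index, polarity) pairs per clause.
clauses = [[(0, True), (1, False)],
           [(0, False), (1, True)],
           [(1, True), (2, True)]]

def clause_scores(x):
    """Soft satisfaction per clause: 1 - prod(1 - literal value)."""
    scores = []
    for clause in clauses:
        lits = [x[i] if pol else 1.0 - x[i] for i, pol in clause]
        scores.append(1.0 - np.prod([1.0 - l for l in lits]))
    return np.array(scores)

def loss(x):
    return np.sum((1.0 - clause_scores(x)) ** 2)   # zero iff all clauses satisfied

rng = np.random.default_rng(0)
x = rng.uniform(0.2, 0.8, size=3)                  # relaxed Boolean inputs
for _ in range(500):
    g = np.zeros_like(x)
    for i in range(x.size):                        # finite-difference gradient
        p, m = x.copy(), x.copy()
        p[i] += 1e-5; m[i] -= 1e-5
        g[i] = (loss(p) - loss(m)) / 2e-5
    # added noise as a stand-in for stochastic gradient descent
    x = np.clip(x - 0.1 * (g + 0.01 * rng.normal(size=3)), 0.0, 1.0)

assignment = x > 0.5
print(assignment, clause_scores(assignment.astype(float)))   # all scores 1.0 if solved
```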
“We believe we are just scratching the surface for the potential of a dCAM,” said Pedretti.
For example, a dCAM could be paired with other neuromorphic blocks (such as a dot-product engine) to build fully differentiable computing systems capable of learning complex tasks without prior knowledge or heavy programming. A hybrid memory could learn to behave more like a DPE or more like a CAM, changing its behaviour based on the workload.
Differentiable content addressable memory with memristors
