
Magnetic components key to low-power neuromorphic computing
Neuromorphic computing, say the researchers, is poised to power the next stage of the machine learning and artificial intelligence revolution that has dominated the technology landscape in recent years. Instead of processing tasks one at a time, these brain-inspired devices are meant to analyze huge amounts of data simultaneously.
Training the algorithms behind these systems, however, has driven a huge increase in the energy needed to process the massive troves of data generated by today’s devices.
“Right now, the methods for training your neural networks are very energy-intensive,” says Jean Anne Incorvia, an assistant professor in the Cockrell School’s Department of Electrical and Computer Engineering. “What our work can do is help reduce the training effort and energy costs.”
Rather than look to traditional silicon-based computing solutions, the researchers focused on magnetic components. What they found, they say, is new insight into how the physics of magnetic components can cut the energy costs and overhead of training algorithms for neural networks.
They discovered that spacing magnetic nanowires, which act as artificial neurons, in certain ways naturally strengthens the neurons’ ability to compete against one another, with the most activated ones winning out. Achieving this effect, known as “lateral inhibition,” traditionally requires extra processing circuitry within computers, which raises costs and consumes more energy and space.
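To make the mechanism concrete, here is a minimal simulation sketch of lateral inhibition between two artificial neurons. It uses a generic leaky integrate-and-fire model with an explicit mutual-inhibition weight; the function name and every parameter value are illustrative assumptions rather than details from the paper, where the inhibitory coupling arises physically from the magnetic interaction between adjacent nanowires.

```python
import numpy as np

# Illustrative winner-take-all sketch: two leaky integrate-and-fire
# neurons with mutual inhibition. All parameters are assumptions made
# for this demo, not values from the paper.

def simulate(inputs, w_inhibit=1.0, leak=0.1, threshold=1.0, steps=100):
    """Return spike counts for two mutually inhibiting neurons."""
    v = np.zeros(2)                # membrane potentials
    fired = np.zeros(2)            # spikes emitted on the previous step
    counts = np.zeros(2, dtype=int)
    for _ in range(steps):
        # Integrate input, apply leak, and subtract inhibition driven
        # by the other neuron's most recent spike.
        v += inputs - leak * v - w_inhibit * fired[::-1]
        v = np.maximum(v, 0.0)
        fired = (v >= threshold).astype(float)
        counts += fired.astype(int)
        v[fired == 1.0] = 0.0      # reset any neuron that just fired
    return counts

# The more strongly driven neuron fires repeatedly and silences its
# neighbor: with these inputs the result is [14  0].
print(simulate(np.array([0.20, 0.15])))
```

In the researchers’ devices, the strength of this competition is set by the physical spacing between the nanowire “racetracks” rather than by an explicit weight like `w_inhibit`, which is what removes the need for dedicated inhibition circuitry.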
According to the researchers, their method consumes 20 to 30 times less energy than a standard back-propagation algorithm performing the same learning tasks.
“The same way human brains contain neurons, new-era computers have artificial versions of these integral nerve cells,” say the researchers. “Lateral inhibition occurs when the neurons firing the fastest are able to prevent slower neurons from firing. In computing, this cuts down on energy use in processing data.”
This research focused on interactions between two magnetic neurons, along with initial results on interactions among multiple neurons. The next step, say the researchers, is to apply the findings to larger sets of neurons and to verify the results experimentally.
For more, see “Maximized Lateral Inhibition in Paired Magnetic Domain Wall Racetracks for Neuromorphic Computing.”
