There is the prospect that spin-orbit torque MRAM might achieve the right speed-endurance trade-off to move inside core logic, said Yeric (see IMEC makes spin-orbit torque MRAM on 300mm silicon). That would enable something called "normally-off" computing. "It's a great change," said Yeric.
The ability to freeze computing processes on-chip, retain state while drawing no power and then resume, would have considerable consequences, Yeric said. "It would require a new processor architecture. I think we would be adding a new processor line; something that could address a different power envelope in the IoT space; working without batteries by using harvested energy. It would also be able to leverage the massive momentum in IoT," said Yeric.
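The idea behind normally-off computing can be illustrated with a short sketch: a task checkpoints its state into non-volatile memory after each step, so it can lose power at any moment and resume where it left off once harvested energy returns. This is a minimal, illustrative simulation, not ARM's design; the names (`nvram`, `sum_with_checkpoints`, `budget`) are hypothetical.

```python
nvram = {}  # stands in for an on-chip NVM region that survives power loss

def sum_with_checkpoints(data, budget):
    """Sum `data`, but 'lose power' after `budget` steps.
    Returns the total once all elements are processed, else None."""
    # Restore state from NVM, or start fresh on first boot
    i = nvram.get("i", 0)
    total = nvram.get("total", 0)
    for _ in range(budget):  # energy budget from the harvester
        if i == len(data):
            return total  # done: the result survived every outage
        total += data[i]
        i += 1
        nvram["i"], nvram["total"] = i, total  # checkpoint each step
    return None  # power lost; state is safe in NVM

data = list(range(10))
result = None
while result is None:  # each iteration is one harvested-energy burst
    result = sum_with_checkpoints(data, budget=3)
print(result)  # 45, i.e. sum(range(10)) despite repeated power loss
```

The computation here survives three "power failures" and still produces the correct answer, because no progress is ever held only in volatile state.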
But these things are not a done deal yet. Bringing new materials into the fab always requires care and can raise adoption costs. This is one reason there are also non-volatile memories based on more familiar materials, such as ferroelectric memory made from hafnium oxide (see Dresden NVM startup raises funds) and ReRAM based on silicon oxide (see Weebit silicon-oxide ReRAM headed to 28nm, AI). Both materials are already used in fabs as insulators, but researchers are discovering properties in them that can be used as memories and are making good progress.
And what about the CeRAM being pioneered by Carlos Paz de Araujo, a professor at the University of Colorado, through his company Symetrix Corp. (Colorado Springs, Colo.)? ARM has been working with Symetrix since about 2014, and Yeric reckons the technology is still two or three years from commercialization. "It has a chance on paper. It has the endurance, speed and energy, but then a lot of non-volatile memories appeared to have a chance at this stage in their development."
Yeric made the point that the devil is always in the detail of progressing down in process node and up in integration, from bits to arrays to subsystems. "We hope to have something to report at the next ARM Research Summit," he added.
At this point we changed gear to discuss neuromorphic computing. ARM already has two machine learning processors on their way to customers (see ARM launches two machine learning processors); the ARM ML and ARM OD (object detection) processors were due to be available for licensing in mid-2018. But we asked whether analog was ultimately the way to go.
"There are papers out there that suggest analog machine learning is going to be lower power but there are also lots of things to overcome; such as verification of circuits and variability and repeatability in the field," Yeric said. He also pointed out that something may offer a tremendous uplift in the depths of the compute kernel but that advantage can become "washed out" at the system level. This can make a significant change less than worthwhile.
A second factor is that memory management and the interface to other digital circuitry can become complex.
"The third issue is EDA. The EDA industry doesn't tend to speculate and that provides a chicken and egg problem. That's true in non-volatile memory, cryogenics and 3D design. So part of the research path is building miniature ecosystems to support potential technical directions."