While the use of ReRAMs – sometimes known as memristors – to implement machine learning neural networks can be 1,000 times more efficient than running such networks in software on conventional von Neumann machines, the accuracy can be variable, due to random telegraph noise (RTN) and other variabilities. RTN manifests as random, step-like transitions between two or more discrete voltage or current levels. It is not fully understood, but is thought to be caused by sudden releases of charge previously trapped at surface interfaces, or to be related to surface contaminants.
In a study published in Nature Communications, engineers at University College London (UCL) found that the neural network accuracy could be improved by forming several sub-groups of neural networks and averaging their calculations, so that errors in individual networks could be cancelled out.
Adnan Mehonic, PhD student Dovydas Joksas and colleagues tested both filamentary and supposedly non-filamentary ReRAMs, as well as chalcogenide phase change memory. These included the following material systems: TaOx, Ta/HfO2, HfOx, Al2O3/TiO2, SiOx, TiO2/a-Si and PCM.
The research showed that “processing by committee” – averaging the outputs of several sub-networks – improved the accuracy, regardless of material or technology. It was also effective across a range of different non-idealities that can degrade accuracy.
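The error-cancelling effect of a committee can be sketched numerically: several independent noisy copies of the same computation, when averaged, show a smaller typical error than any single copy. This is a minimal illustration of the statistical principle only, not the study's actual networks; the Gaussian noise model and the `TRUE_OUTPUT` and `NOISE_STD` values are assumptions standing in for RTN-like device variability.

```python
import random
import statistics

random.seed(0)

TRUE_OUTPUT = 1.0   # hypothetical ideal network output (assumed)
NOISE_STD = 0.3     # assumed spread of RTN-like device variability

def noisy_network_output(true_value, noise_std):
    """One committee member: the ideal output corrupted by device noise."""
    return true_value + random.gauss(0.0, noise_std)

def committee_output(true_value, members, noise_std):
    """Average the outputs of several independent noisy sub-networks."""
    return statistics.fmean(
        noisy_network_output(true_value, noise_std) for _ in range(members)
    )

# Compare the typical error of one network against a 10-member committee.
trials = 2000
single_err = statistics.fmean(
    abs(noisy_network_output(TRUE_OUTPUT, NOISE_STD) - TRUE_OUTPUT)
    for _ in range(trials)
)
committee_err = statistics.fmean(
    abs(committee_output(TRUE_OUTPUT, 10, NOISE_STD) - TRUE_OUTPUT)
    for _ in range(trials)
)
print(f"single network mean error:  {single_err:.3f}")
print(f"10-member committee error: {committee_err:.3f}")
```

Because the members' errors are independent, the committee's mean error shrinks roughly with the square root of the number of members, which is why splitting one large network into several averaged sub-networks can help even when each sub-network is individually noisier.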
“We hoped that there might be more generic approaches that improve not the device-level, but the system-level behaviour, and we believe we found one,” said Mehonic in a statement. “Arranging the neural network into several smaller networks rather than one big network led to greater accuracy overall.”
Intrinsic Ltd is a startup formed by Professor Tony Kenyon and Adnan Mehonic to commercialize a SiOx-based ReRAM device.