Quantum computing is one of the novel computational architectures being pursued, but another that has become red hot in industry, and is already being deployed, is neuromorphics. Leti is working hard to provide added value for its research partners in this area.
Progress with Moore's Law and with parallel hardware architectures now allows a dramatic scaling up of neural networks, to the point where hardware can simulate networks with millions of neurons and billions of synapses. Until now the selected architectures have been trained in the datacentre with huge datasets. Recurrent neural networks are effective for recognizing sequences such as speech, while convolutional neural networks use trainable convolution filters for image recognition, Reita explained.
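The "trainable convolution filter" idea can be sketched in a few lines. This is an illustrative example only, not Leti's code: the 3x3 vertical-edge kernel and the toy image are invented for the demonstration, and in a real convolutional network the kernel weights would be learned from data rather than hand-written.

```python
def convolve2d(image, kernel):
    """Valid 2D convolution (strictly, cross-correlation, as in most
    neural-network frameworks): slide the kernel over the image and
    sum the element-wise products at each position."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    output = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            total = 0.0
            for di in range(kh):
                for dj in range(kw):
                    total += image[i + di][j + dj] * kernel[di][dj]
            row.append(total)
        output.append(row)
    return output

# A vertical-edge kernel responds strongly where intensity changes
# from left to right, as at the dark-to-bright boundary in this image.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
kernel = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]
print(convolve2d(image, kernel))  # strong response across the edge
```

A CNN stacks many such filters in layers and adjusts their weights by training, so the network itself discovers which features (edges, textures, shapes) matter for the recognition task.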
Nonetheless, the range of problems and architectural approaches is broad. In general, the more software-oriented the solution, the less efficient but the more flexible and widely applicable it is; the more hardware-oriented, the more efficient but the more case-specific and less flexible. "To recognize one person in a million you have to go to the cloud. But to tell the difference between a person and a dog can be done small scale."
One of the approaches Leti is looking at is how to address problems hierarchically, so that the same problem can be solved on the smartphone, on the television or in the cloud, Reita said.
Leti is part of a joint program spanning a number of research institutes within CEA Tech, including Leti and List, the Laboratory for Integration of Systems and Technology. The program is looking at the physical implementation of neural networks on dedicated circuits and memories made using advanced silicon manufacturing. This links back to novel devices and 3D integration, including CoolCube, ReRAM as synaptic elements, FDSOI and nanowires.
And by way of design support, Leti has developed a neural network design platform called N2D2, which can produce both software and synthesizable hardware.
N2D2, Leti's internal tool for neural network design and optimization.
In terms of neural network design Leti has turned towards spiking architectures: ones in which neurons and synapses are modelled against time and communicate by emitting a series of signal spikes. The neurons do not all fire at each propagation cycle (as happens with multi-layer perceptron networks) but fire only when a neuron's state reaches a specific threshold value. The spike then pushes the state of connected neurons higher or lower, and there may be an inherent decay function. Various schemes can be used to encode a real-valued number, relying either on the frequency of spikes or on the timing between spikes.
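The mechanics described above, integration towards a threshold, a spike followed by a reset, and an inherent decay, can be shown with a minimal leaky integrate-and-fire sketch. This is an illustrative model with invented parameter values, not Leti's or N2D2's implementation; it also shows the frequency-based (rate) coding scheme, where a stronger input yields more spikes per unit time.

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch. Threshold,
# decay and reset values are assumed for illustration only.

def simulate_lif(input_current, threshold=1.0, decay=0.9, reset=0.0):
    """Integrate input over discrete time steps; emit a spike (1) when
    the membrane state crosses `threshold`, then reset the state.
    `decay` models the inherent leak mentioned in the text."""
    state = 0.0
    spikes = []
    for current in input_current:
        state = state * decay + current   # leaky integration
        if state >= threshold:
            spikes.append(1)              # neuron fires a spike
            state = reset                 # state resets after firing
        else:
            spikes.append(0)              # stays silent this step
    return spikes

# Rate coding: the stronger constant input fires far more often
# over the same 20 time steps than the weaker one.
weak = simulate_lif([0.2] * 20)
strong = simulate_lif([0.6] * 20)
print(sum(weak), sum(strong))  # prints "2 10"
```

Because a neuron with no input simply decays towards rest and never fires, an idle network generates no spike traffic, which is the asynchronous power-saving property Reita describes next.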
"We can have a fully asynchronous system so when it is inactive no power consumption," said Reita. "It's easier to save power in analog and spiking systems."