
DARPA invests $5.7 million in neural image processor research


Technology News | By eeNews Europe



Besides processing images 1,000 times faster than conventional computers, the image-processing neural network aims to consume 10,000 times less power than today's processors, by using memristors, which draw zero current when idle, as the neurons' memory synapses.

Adaptive neural networks learn the features in an image, rather than memorize its pixel values, allowing simpler representations in memory — for instance, just two features, "round" and "red," might suffice to determine that a traffic light says stop.
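
As a toy illustration of that idea (hypothetical code, not from the project): once a network has reduced an image to a handful of learned features, the final decision becomes trivially cheap compared with comparing stored pixel values.

```python
# Toy illustration of deciding from learned features instead of raw pixels
# (hypothetical example, not project code): two booleans settle a question
# that would otherwise require comparing thousands of stored pixel values.
def traffic_light_says_stop(features: dict) -> bool:
    return features.get("round", False) and features.get("red", False)

print(traffic_light_says_stop({"round": True, "red": True}))   # True: stop
print(traffic_light_says_stop({"round": True, "red": False}))  # False
```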

Professor Wei Lu’s neural network image processor will connect artificial neurons using a crossbar (lower left) of memristors with migrating oxygen vacancies (upper right) in tungsten oxide to adaptively change its synaptic connection strengths.

To detect such features, neurons are arrayed to input all the pixels in an image at once, then process them in layers with variable synapses between them — similar to the visual cortex of the brain. Learning an image proceeds by inputting it to the first layer, whereupon the middle layers self-organize an internal representation, with the last layer acting as an array of single feature detectors. In practice, the more an image feature is presented to the neural network during learning, the stronger the synaptic connections that detect that feature will become.
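
In software terms, that learning rule resembles classic competitive, Hebbian-style learning: the neuron that responds most strongly to an input has its synapses pulled toward that input. The sketch below is a minimal NumPy illustration of the principle, assuming a winner-take-all update; it is not the Michigan group's actual algorithm, and all names in it are invented.

```python
import numpy as np

# Minimal competitive-learning sketch of the principle described above
# (assumed winner-take-all Hebbian update; not Lu's actual algorithm).
rng = np.random.default_rng(0)
N_PIXELS, N_NEURONS = 64, 8                 # e.g. an 8x8 image, 8 detectors
weights = rng.normal(0.0, 0.1, (N_NEURONS, N_PIXELS))  # synaptic strengths
LR = 0.05                                   # learning rate

def learn(image: np.ndarray) -> None:
    """Present one image; the most responsive neuron's synapses strengthen,
    so features seen often end up with the strongest connections."""
    x = image.ravel()
    x = x / (np.linalg.norm(x) + 1e-9)      # normalized input layer
    winner = int(np.argmax(weights @ x))    # last-layer neuron firing hardest
    weights[winner] += LR * (x - weights[winner])

def detect(image: np.ndarray) -> int:
    """After learning, report which single feature detector responds."""
    x = image.ravel()
    x = x / (np.linalg.norm(x) + 1e-9)
    return int(np.argmax(weights @ x))
```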

To test slightly different architectures, the University of Michigan researchers, led by professor Wei Lu, are designing two prototypes. The simpler one uses memristors to store the values of its synapses, but uses conventional connections between layers. The more complex architecture mimics the brain more closely by using the memristors themselves to process voltage spikes sent between layers.
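
The appeal of the second approach is that a memristor crossbar can compute a whole layer's response in one analog step: with synaptic weights stored as crosspoint conductances, input voltages on the rows produce column currents that are, by Ohm's and Kirchhoff's laws, a vector-matrix product. A numerical sketch of that idealized picture (not Lu's circuit):

```python
import numpy as np

# Idealized crossbar arithmetic (illustrative, not Lu's circuit): each
# crosspoint conductance G[i, j] is a synaptic weight; driving the rows
# with voltages V yields column currents I[j] = sum_i V[i] * G[i, j],
# i.e. the layer's vector-matrix multiply happens in the analog domain.
def crossbar_currents(V: np.ndarray, G: np.ndarray) -> np.ndarray:
    return V @ G   # Ohm's law per crosspoint, Kirchhoff's sum per column

rng = np.random.default_rng(1)
V = np.array([0.0, 1.0, 1.0, 0.0])          # row voltage spikes
G = rng.uniform(1e-6, 1e-4, size=(4, 3))    # conductances, in siemens
print(crossbar_currents(V, G))              # column currents, in amps
```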

University of Michigan professor Wei Lu is designing a neural network chip that processes images 1,000 times faster than conventional computers.
(Source: University of Michigan)


In an interview with EE Times, Lu said:

Basically, there are two approaches we are developing. One uses small local memristors to store the weights, which are calculated using well-known learning algorithms, with most of the computations performed in the neuron. The other approach is more dramatic, because we use the memristor to do the learning directly in its synapses; that is riskier, because you need a large amount of memory and the algorithms are not well known.

Over the last eight years Lu’s group has developed two types of direct-learning algorithms for memristors — timing-based learning and weight-based learning.
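
Timing-based rules of this kind are usually variants of spike-timing-dependent plasticity (STDP). As a sketch of the generic rule (not necessarily the exact form Lu's group published): a synapse strengthens when its input spike shortly precedes the output spike, and weakens when it follows.

```python
import math

# Generic STDP update (illustrative; the exact rule used by Lu's group
# may differ): potentiate when the pre-synaptic spike precedes the
# post-synaptic one, depress otherwise, with exponential time windows.
def stdp_delta_w(t_pre_ms: float, t_post_ms: float,
                 a_plus: float = 0.010, a_minus: float = 0.012,
                 tau_ms: float = 20.0) -> float:
    dt = t_post_ms - t_pre_ms       # > 0 means pre fired before post
    if dt > 0:
        return a_plus * math.exp(-dt / tau_ms)    # strengthen synapse
    return -a_minus * math.exp(dt / tau_ms)       # weaken synapse

print(stdp_delta_w(10.0, 15.0))   # pre leads post by 5 ms -> positive
print(stdp_delta_w(15.0, 10.0))   # pre lags post by 5 ms -> negative
```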

University of Michigan professor Wei Lu (standing) works in the clean room with electrical engineering doctoral candidate, Siddharth Gaba.
(Source: Scott C. Soderberg, University of Michigan)


"We stimulate the network with images and the network self-adapts allowing its weights to evolve until a single neuron responds to a specific feature of the image, after which we can use the network to determine if a particular feature is present in any image," said Lu.

Funding for the first year of the project is set at $1.3 million, with new infusions each year during the first phase, which ends in 30 months with a prototype that can extract features from any image. The second phase aims to add a classifier that takes the features detected and recognizes combinations of them as particular objects, such as detecting the difference between a friendly F-15 jet and an adversary’s MiG jet.
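
A toy sketch of that phase-two step (the feature names and signatures below are invented, not program data): the classifier maps a set of detected features to an object label.

```python
# Hypothetical phase-two classifier sketch: map detected-feature sets to
# object labels. Feature names and signatures here are invented examples.
SIGNATURES = {
    "F-15": {"twin_tail", "swept_wing"},
    "MiG":  {"single_tail", "delta_wing"},
}

def classify(detected: set) -> str:
    for label, signature in SIGNATURES.items():
        if signature <= detected:    # all signature features were detected
            return label
    return "unknown"

print(classify({"twin_tail", "swept_wing", "canopy"}))  # -> F-15
```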

The layout of the memristor array (center), which acts as the memory synapses for the learning neurons, uses tungsten oxide with oxygen vacancies that migrate when current flows, thus changing a synapse's strength.
(Source: University of Michigan)


Wei Lu is also a cofounder of Crossbar Inc. (Santa Clara, Calif.), which uses migrating silver ions in amorphous silicon to create resistive random access memories (ReRAM). But for his DARPA contract, instead of silver, he is casting his memristors in tungsten oxide, which changes its resistance as oxygen vacancies migrate from one end of the memristor to the other — depending on which way the current is flowing — thus acting as a resistance-based memory element.
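
A simplified state model captures that behavior: the device's resistance interpolates between two extremes as the vacancy front migrates, and the front's position moves with the charge that has flowed. The sketch below is a generic linear-drift abstraction, not a fitted tungsten-oxide device model; all constants are arbitrary.

```python
# Generic linear-drift memristor abstraction (illustrative constants; not
# a fitted tungsten-oxide model). State x in [0, 1] is the oxygen-vacancy
# front position; current in one direction raises x (lowering resistance),
# the other direction lowers it, and an idle device simply holds its state.
class Memristor:
    R_ON, R_OFF = 1e3, 1e5     # ohms at the two extreme vacancy positions
    K = 1e4                    # drift rate constant (arbitrary units)

    def __init__(self, x: float = 0.5):
        self.x = x

    def resistance(self) -> float:
        return self.R_ON * self.x + self.R_OFF * (1.0 - self.x)

    def apply(self, current_a: float, dt_s: float) -> None:
        self.x = min(1.0, max(0.0, self.x + self.K * current_a * dt_s))

m = Memristor()
print(m.resistance())                # ~50.5 kohm at x = 0.5
m.apply(current_a=1e-3, dt_s=0.1)    # forward current drives vacancies
print(m.resistance())                # resistance drops as x increases
```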

All the work is being performed under the DARPA program called Unconventional Processing of Signals for Intelligent Data Exploitation. Lu’s project is titled Sparse Adaptive Local Learning for Sensing and Analytics. His collaborators include fellow professors Zhengya Zhang and Michael Flynn, Los Alamos National Lab scientist Garrett Kenyon, and Portland State University professor Christof Teuscher.
