Large-scale integration of neuronics: Part 2

Feature articles | By eeNews Europe



In the first part of this blog (Large-scale integration of neuronics: Part 1) we looked at a possible methodology for neuronic analog memory. We'll continue that discussion and look at ways to mimic the biological version of the neuron.

Neural computing has been carried out almost exclusively on digital computers as simulations. While this is illuminating, it gives few clues toward analog implementation of memory. If we look into some of the schemes that have been developed by neural-net researchers, we can begin to see the possibilities for analog implementation. The mainstream schemes for the basic computing cell, the electronic neuron, implement one to three layers (or what we would call stages) of these cells with a relatively large number of them per stage.

The typical neuron is itself a two-stage unit. The first stage is linear and scales the inputs, x_i, by the stored values, or weights, w_i. The combined result, which can also be expressed mathematically as the dot product of the input vector with the weight vector, drives the second, nonlinear stage. This function can vary; it is often the logistic (sigmoid) function, whose derivative has the convenient form x_o·(1 – x_o), but here it is taken to be the bounded, tanh-like function implemented by a BJT differential pair. It has been shown that successful learning techniques can be applied provided the function has a continuous derivative.
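As a rough illustration (not from the original article), here is a minimal Python sketch of the two-stage neuron just described; tanh stands in for the bounded BJT differential-pair characteristic, and all names are illustrative.

```python
import math

def neuron(x, w):
    """Two-stage electronic neuron (sketch).

    Stage 1 (linear): weighted sum of the inputs, i.e. the dot product w . x.
    Stage 2 (nonlinear): a bounded squashing function; tanh is used here as a
    stand-in for the BJT differential-pair characteristic, so the output is
    confined to (-1, +1).
    """
    s = sum(wi * xi for wi, xi in zip(w, x))  # linear stage: w . x
    return math.tanh(s)                       # nonlinear, bounded stage

# Example: three inputs scaled by three stored weights
print(neuron([0.5, -1.0, 0.25], [0.8, 0.3, -0.5]))
```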

The nonlinearity is that of a bounded function. The input has a theoretically infinite range, but the output is constrained to lie within ±1. For the outputs of a stage of n neurons, the resulting output vector is therefore confined to an n-dimensional hypercube with vertices at (±1, ±1, …, ±1).
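To make the bounding concrete, the following sketch (again hypothetical, reusing the neuron above) evaluates a stage of n neurons and shows that the output vector stays inside the hypercube no matter how large the inputs are.

```python
import math

def neuron(x, w):
    # Bounded neuron from the previous sketch: dot product followed by tanh.
    return math.tanh(sum(wi * xi for wi, xi in zip(w, x)))

def stage(x, W):
    """One stage of n neurons sharing the input vector x.

    W holds one weight vector per neuron. Every output component is bounded
    by the nonlinearity, so the output vector lies inside [-1, +1]^n.
    """
    return [neuron(x, w) for w in W]

# Even with very large weighted sums, the outputs saturate rather than grow.
W = [[10.0, 0.0, 0.0], [0.0, -10.0, 0.0], [5.0, 5.0, 5.0]]
print(stage([3.0, 3.0, 3.0], W))  # each component stays within (-1, +1)
```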

 

If we consider the above analog neuron to be our basic building block, how can we implement the memory needed for the weights, w_i? It is not unreasonable to suppose that some form of feedback will be necessary.

In analog circuits with bistable states, some form of positive feedback is always required. To write values of w_i, the first neuron stage must be accessible through an input other than x so that w can be changed. Yet this does not directly address the problem of how the w values are to be sustained. One possibility to explore is a set of neurons forming positive-feedback loops that affect only the w values. For a nearly linear feedback system, the result is a sine-wave oscillator; for circuits with bounded outputs, such as the electronic neuron, distinct bistable or multistable vector states are possible.
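One way to picture such a loop (a hypothetical sketch, not a circuit from the article): the fragment below feeds a bounded neuron's output back to its own input with a loop gain greater than one. Iterating the loop settles to one of two stable states near ±1, i.e. a one-bit analog latch that could sustain a w value.

```python
import math

def w_cell(w0, gain=3.0, steps=50):
    """Weight-storage cell sketched as a positive-feedback loop.

    A bounded neuron driven by its own output with loop gain > 1 has two
    stable states near +1 and -1; the sign of the initial value w0 selects
    which state the loop settles into, so the loop can sustain a w value.
    """
    w = w0
    for _ in range(steps):
        w = math.tanh(gain * w)  # bounded stage driven by its own output
    return w

print(w_cell(+0.1))  # settles near +1
print(w_cell(-0.1))  # settles near -1
```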

This idea will be explored further in the next episode before returning to the problem of analog CMAC implementation.

 
