
Neuronics creates efficient memory: Part 1

Feature articles | By eeNews Europe



Neuroelectronics is in its infancy. Perhaps it is at a state of development that deserves a shorter neologism: neuronics. It started in earnest as far back as the 1970s, when Carver Mead at Caltech drew his correspondences between subthreshold MOSFETs and neurons. And it might be the path to analog ultra-integration, rivaling Pentiums and Celerons.

In recent decades, neural computing has come into its own. Now it is largely associated with digital computing. However, a perusal of the earlier technical papers of the MIT Artificial Intelligence Lab and Carver Mead’s work shows that some very powerful computing possibilities exist with analog circuits.

Dennis Feucht, electronics engineer with Innovatia in Belize.

Some years ago, I had a conversation with Hans Moravec, head of the Mobile Robotics Lab at the Carnegie Mellon University Field Robotics Center in Pittsburgh and one of the major pioneering contributors to mobile robotics. I advanced the argument that analog circuits have a higher information density than digital circuits, because the information content of the voltage at a node is log2(SNR), the base-2 logarithm of the signal-to-noise ratio, which is the number of distinguishable voltage levels expressed in bits. Moravec countered by pointing out that, with digital encoding, redundancy can be squeezed out of the data, so that the effective number of bits per node can be higher than one.
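As a rough numerical check on that log2(SNR) figure, here is a minimal sketch of my own, not from that conversation; the function name and voltage figures are illustrative assumptions, not measurements.

import math

def bits_per_node(signal_swing_v: float, noise_rms_v: float) -> float:
    """Distinguishable voltage levels at a node, expressed in bits: log2(SNR)."""
    snr = signal_swing_v / noise_rms_v
    return math.log2(snr)

# A 1 V analog swing over a 1 mV RMS noise floor carries roughly 10 bits per node,
# versus the single bit a digital node encodes before any redundancy removal.
print(round(bits_per_node(1.0, 1e-3), 1))   # ~10.0

By this measure a quiet analog node is worth several digital ones; Moravec's counterargument is that clever digital encoding claws much of that advantage back.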

Hmm. I had to think about that, and I am not so sure the same cannot be done with analog information. It too is encoded; think of color video. And why can it not be encoded to achieve the same information-compressing advantages? At some point in electronic design, one comes to the basic realization that digital signal processing and analog waveform processing are not really all that different.
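To make that last point concrete, here is a small sketch, again mine rather than the article's: a first-order RC low-pass rendered as its discrete-time one-pole counterpart. The component values and names are assumptions chosen only for illustration.

def rc_lowpass_step(v_out: float, v_in: float, dt: float, rc: float) -> float:
    """One integration step of the analog relation dVout/dt = (Vin - Vout)/RC."""
    alpha = dt / (rc + dt)             # the smoothing factor of a one-pole IIR filter
    return v_out + alpha * (v_in - v_out)

# Feed a unit step through RC = 1 ms, sampled every 10 us, for one time constant:
v, rc, dt = 0.0, 1e-3, 10e-6
for _ in range(100):
    v = rc_lowpass_step(v, 1.0, dt, rc)
print(round(v, 2))                     # ~0.63, just as the analog circuit would settle

The same few lines describe a capacitor charging through a resistor or a tap in a software filter; only the substrate differs.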

If it can be maintained that analog has a higher information coding density than digital (more bits per node), then ultimately computing will be analog, somewhat like the brain seems to be. How it is to be implemented is the mystery. I would like to advance some ideas about this for brainstorming with you. These ideas are based on very impressive results obtained in robotics by another pioneering contributor, Jim Albus, who was head of the NIST robotics group when I knew him.

In the second part of this blog, we’ll look at the system that Albus was proposing. It was based on the idea of accessing memory locations in an unusual manner.

This article first appeared on Planet Analog, part of sister publication EE Times.

Related links and articles:

Synaptic transistor learns as it switches

IBM unveils cognitive computing chips that mimic brain cells

Memristors ready for prime time