
BrainChip CEO gives details of neural network architecture
The company’s business model was discussed last month (see Neural computing startup aspires to be the ARM of neuromorphic cores). The company’s technology is known as SNAP, which stands for Spiking Neuron Adaptive Processor.
One of the main differences between BrainChip’s implementation and other neuromorphic processors, whether realized in hardware or software, is that Peter van der Made has attempted a closer modelling of biological neural networks, including the spike-train method of data transfer and the modelling of multiple modulations of signals at the synaptic connection.
"The number of neurons and synapses is configurable in the RTL. We could put as many as 10,000 neurons and five million synapses on a single die. These are neurons that behave like biological neurons with multiple spiking modes and dynamic, temporal integrating synapses," said Van der Made in email communication with eeNews Europe. He added: "The neurons and synapses are not multiplexed – unlike other designs like IBM’s TrueNorth which are multiplexed 256x and do not learn."
Peter van der Made, CTO and interim CEO of BrainChip Inc.
"The advantage of not multiplexing is that they are thousands of times faster, that all memory can be distributed, which simplifies the learning method. The learning method we use is STDP – Spike Time Dependent Plasticity, which constantly accesses memory," said Van der Made.
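To give a flavour of how a pair-based STDP rule adjusts a synaptic weight from the relative timing of spikes, the short Python sketch below uses a generic exponential learning window; the constants and the function are illustrative assumptions, not a description of BrainChip’s circuit.

import math

# Minimal pair-based STDP sketch (illustrative only, not BrainChip's implementation).
# A_PLUS/A_MINUS and the time constants are assumed example values.
A_PLUS, A_MINUS = 0.01, 0.012
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # ms

def stdp_delta_w(t_pre, t_post):
    """Return the weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt >= 0:
        # Pre-synaptic spike precedes post-synaptic spike: potentiation.
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    # Post precedes pre: depression.
    return -A_MINUS * math.exp(dt / TAU_MINUS)

# Example: a pre spike 5 ms before a post spike strengthens the synapse.
w = 0.5
w += stdp_delta_w(t_pre=100.0, t_post=105.0)
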
The use of distributed memory located at the synapses means that SNAP64 is capable of updating neurons at a rate of millions per second; this has been taken up to 4 million updates per second in an FPGA implementation, Van der Made said. The circuit implementation of SNAP64 is all-digital, although the spikes are spatially and temporally distributed and asynchronous. The SNAP64 RTL has been implemented on an FPGA board from Dini Group La Jolla Inc. (La Jolla, Calif.) carrying multiple 20-million-gate Xilinx FPGAs.
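As a rough software analogy for an event-driven spike fabric with state distributed at the synapses, the following Python sketch processes a queue of spike events and touches only the weights and potentials local to each event; the data structures, threshold and delay are assumptions for illustration and do not represent the SNAP64 RTL.

import heapq

# Hypothetical event-driven spike propagation sketch (not the SNAP64 RTL).
# Each neuron keeps its own membrane potential, and synaptic weights are held
# with the source connection, so handling one event only touches local state.
weights = {0: {1: 0.6, 2: 0.3}, 1: {2: 0.8}, 2: {}}   # source -> {target: weight}
potential = {n: 0.0 for n in weights}
THRESHOLD = 1.0
events = [(0.0, 0)]                                   # (time in ms, spiking neuron)

while events:
    t, src = heapq.heappop(events)
    for dst, w in weights[src].items():
        potential[dst] += w                           # integrate the incoming spike
        if potential[dst] >= THRESHOLD:               # fire and reset
            potential[dst] = 0.0
            heapq.heappush(events, (t + 1.0, dst))    # assumed 1 ms delay

print(potential)
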
BrainChip Race Car Demonstration (Milestone 1) from Aziana Ltd. (now BrainChip Holdings Ltd.) on Vimeo.
"The SNAP64 architecture is designed to access 65536 (64k) neurons within the same chip, and chips can be stacked to a total of 2^48 = 256B neurons. SNAP64 is fully configurable; the neurotransmitter type and level, neuro-modulators, synaptic connections, and neuron type can be configured through a microprocessor interface. Alternatively, these parameters can be set in the RTL for a dedicated design," wrote Van der Made.
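One way to picture that address space is a global neuron identifier split into a chip index plus a 16-bit on-chip neuron index, since 2^16 = 65536. The Python sketch below illustrates such a split; the field layout is an assumption for illustration, not a documented SNAP64 format.

# Hypothetical address split for a stacked-chip neuron space (illustrative only).
LOCAL_BITS = 16                      # 2**16 = 65536 neurons per chip, per the article
TOTAL_BITS = 48                      # quoted total address space of 2**48 neurons

def split_address(global_id):
    """Split a global neuron id into (chip_id, local_neuron_id)."""
    chip_id = global_id >> LOCAL_BITS
    local_id = global_id & ((1 << LOCAL_BITS) - 1)
    return chip_id, local_id

print(split_address(0x123456789A))   # chip 0x123456, local neuron 0x789A
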
Van der Made also provided information on the resolution of potentials in the various parts of the neural network: "Synapses are at this time 18 bits wide, but there can be thousands of synapses contributing to the membrane potential of the neuron. The integrator in the dendrites is 22 bits wide, and the soma integrator is 24 bits wide. These component widths are easily configurable in the RTL if we need more or less resolution."
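Those widths can be read as fixed-point accumulators that saturate at their stated resolution. The Python sketch below assumes simple signed saturating adders purely to illustrate 18-bit synapse contributions feeding a 22-bit dendritic integrator and a 24-bit soma integrator; the two's-complement format and the summing order are assumptions, not details BrainChip has disclosed.

# Illustrative saturating fixed-point accumulation at the quoted bit widths:
# 18-bit synapse values, a 22-bit dendrite integrator, a 24-bit soma integrator.
def saturate(value, bits):
    """Clamp a signed integer to the range representable in 'bits' bits."""
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return max(lo, min(hi, value))

synaptic_contributions = [150_000, -20_000, 90_000]       # example raw contributions
dendrite = 0
for s in synaptic_contributions:
    dendrite = saturate(dendrite + saturate(s, 18), 22)   # 18-bit synapse, 22-bit dendrite

soma = saturate(dendrite, 24)                             # 24-bit soma integrator
print(dendrite, soma)                                     # 150000 clips to the 18-bit limit 131071
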
Finally, he pointed out that the SNAP64 needs to communicate with the external world. "To communicate with a computer we need labeled data. For that purpose we have incorporated sensory neurons that take values in and output spikes, and motor neurons that take spikes in and output values."
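The conversion he describes can be pictured as rate coding: a value becomes a number of spikes within a time window on the way in, and a spike count is averaged back into a value on the way out. The Python sketch below assumes that scheme purely for illustration; BrainChip has not detailed its actual encoding here.

# Illustrative rate-coding conversion between values and spike trains
# (an assumed scheme for "sensory" and "motor" neurons, not SNAP64's own).
WINDOW = 100          # time steps per coding window (assumed)
MAX_VALUE = 255       # assumed 8-bit input range

def sensory_encode(value):
    """Map a value to a list of spike times: higher value -> more spikes."""
    n_spikes = round(value / MAX_VALUE * WINDOW)
    step = WINDOW // max(n_spikes, 1)
    return [t for t in range(0, WINDOW, step)][:n_spikes]

def motor_decode(spike_times):
    """Map a spike train back to a value by counting spikes in the window."""
    return len(spike_times) / WINDOW * MAX_VALUE

spikes = sensory_encode(128)
print(len(spikes), motor_decode(spikes))   # 50 spikes decode back to roughly 127.5
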
Van der Made has written a book, Higher Intelligence, published in 2012, which provides a general introduction to neural network technology. It is available from Amazon in print and electronic editions (see www.higherintelligencebook.com).
Applications for the SNAP64 technology include speech and speaker recognition, visual and image recognition, robotics, drones and automotive systems. BrainChip says on its website that it is currently focused on a set of applications, prioritized after consultation with potential partners in California, in the areas of smartphones, the Internet of Things and robotics.
BrainChip: www.brainchipinc.com
Dinigroup: www.dinigroup.com
Related articles:
Neural computing startup aspires to be the ARM of neuromorphic cores
Startup plans neural network circuit for low-power wireless sensors
Startup’s pattern matching tech is inside Intel’s Quark neural net
