
Swiss startup launches mega-neuron vision processor
The company, aiCTX, is a China-backed startup spun out of the Institute of Neuroinformatics at the University of Zurich in March 2017.
In January 2018 the company announced a $1.2 million seed funding round, led by Pre Angel Capital (Zhuji Jiawei). In November 2018 the company added to that $1.5 million from Baidu Ventures.
The DynapCNN is a 12-square-millimeter chip fabricated in a 22nm manufacturing process that houses over 1 million spiking neurons suited for implementing convolutional neural networks. The company claims that the DynapCNN is the most power-efficient way of processing data generated by event-based and dynamic vision sensors.
The company also claims that the DynapCNN is 100 to 1,000 times more power efficient than other "state-of-the-art" approaches and delivers latencies 10 times shorter in real-time vision processing. This, in turn, opens up the prospect of long-life battery-operated equipment.
Computation in the DynapCNN is triggered directly by changes in the visual scene, without using a high-speed clock or frames, showing similarities to the approach used by image sensors from French startup Prophesee (see Chronocam changes name, raises $19 million) and subsequently discussed by Qualcomm.
aiCTX claims its continuous computation enables latencies of less than 5ms and at least a 10x improvement over current deep learning solutions for real-time vision processing.
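The event-driven model described above can be illustrated with a minimal sketch: work happens only when a sensor event arrives, rather than on every frame of a clocked pipeline. Everything below (the threshold value, the `process_events` function, the event tuple layout) is an illustrative assumption, not aiCTX's actual architecture.

```python
# Minimal sketch of event-driven, integrate-and-fire computation.
# Assumption: each event is a tuple (x, y, polarity, t_us), as produced
# by a generic dynamic vision sensor; THRESHOLD is illustrative.

THRESHOLD = 4  # illustrative firing threshold

def process_events(events, width, height):
    """Accumulate events into per-pixel membrane potentials.

    Computation is triggered only by incoming events -- no clock,
    no frames -- so a static scene consumes no processing at all.
    """
    potential = [[0] * width for _ in range(height)]
    spikes = []
    for x, y, polarity, t_us in events:
        potential[y][x] += 1 if polarity else -1
        if potential[y][x] >= THRESHOLD:
            spikes.append((x, y, t_us))   # neuron fires a spike
            potential[y][x] = 0           # reset after firing
    return spikes

# Example: five ON events at one pixel cross the threshold once.
events = [(2, 3, 1, t) for t in range(5)]
print(process_events(events, width=8, height=8))  # -> [(2, 3, 3)]
```

A frame-based pipeline would instead touch every pixel at a fixed rate regardless of scene activity, which is the overhead the event-driven approach avoids.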
Development kits for the DynapCNN processor will be available in 3Q19.
Related links and articles:
News articles:
Development board offered for event-based vision
Chronocam changes name, raises $19 million
Eta adds spiking neural network support to MCU
BrainChip launches spiking neural network SoC
