SynSense has launched application development kits for its Speck and Xylo neuromorphic vision and audio processing chips.
Speck is an event-driven neuromorphic vision SoC that uses dynamic vision sensing and spiking neural network technology. It is a follow-on to the Dynap-CNN processor with which SynSense started when it was founded as aiCTX GmbH in 2017.
Speck combines event-based image sensing with a 320,000-neuron processor to deliver real-time vision processing at milliwatt power consumption, the company said. SynSense provides an open-source software tool chain to enable the training and deployment of convolutional neural networks with up to nine layers.
Xylo is a low-power neuromorphic processor for low-dimensional signal processing and is therefore suited to processing audio streams. Audio applications are not limited to keyword detection; the chip can detect almost any audio feature. Application development is made easy with SynSense's open-source Python library, Rockpool.
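To give a flavour of the spiking-neuron model that underpins chips like Speck and Xylo, the sketch below simulates a single leaky integrate-and-fire (LIF) neuron in plain Python. This is a generic textbook model, not SynSense's Rockpool API or the actual behaviour of its silicon, and the weight, time constant and threshold values are arbitrary choices for illustration.

```python
import math

def lif_neuron(spikes_in, weight=0.6, tau=10.0, threshold=1.0):
    """Simulate a single leaky integrate-and-fire (LIF) neuron.

    spikes_in: binary input spike train (1 = event, 0 = silence).
    The membrane potential leaks toward zero with time constant tau,
    integrates weighted input spikes, and emits an output spike
    (then resets) whenever the potential crosses the threshold.
    Parameter values here are illustrative, not SynSense defaults.
    """
    decay = math.exp(-1.0 / tau)     # per-timestep leak factor
    v = 0.0                          # membrane potential
    spikes_out = []
    for s in spikes_in:
        v = v * decay + weight * s   # leak, then integrate the event
        if v >= threshold:           # fire and reset
            spikes_out.append(1)
            v = 0.0
        else:
            spikes_out.append(0)
    return spikes_out

# A sparse input burst: computation only happens when events arrive,
# which is why event-driven hardware can run at milliwatt power levels.
events = [0, 1, 1, 0, 1, 0, 0, 0, 1, 1]
print(lif_neuron(events))  # → [0, 0, 1, 0, 0, 0, 0, 0, 1, 0]
```

Because the neuron state only changes on incoming events, silent periods cost essentially nothing, which is the key to the low power figures quoted for event-driven processors.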
“At present, more than 100 industry customers, universities and research institutes are using SynSense neuromorphic boards and software,” said Dylan Muir, vice president of global R&D at SynSense, in a statement.
Muir added: “Before SynSense existed, designing, building and deploying an application to neuromorphic SNN [spiking neural network] hardware required a PhD level of expertise, and a PhD’s amount of time – three to four years. Now we have interns joining the company and deploying their first applications to SNN hardware after only one to two months. This is a huge leap forward for commercialisation, and a huge reward for the hard work of the company.”
“We expect more developers to join the neuromorphic community and make breakthroughs,” said Qiao Ning, founder and CEO of SynSense, in the same statement.
SynSense supplies low-power, low-latency inference ASICs and IP blocks, as well as full-stack application development services combining both sensing and computing.