
Kit provides algorithms for neuromorphic image sensor

By Nick Flaherty



French neuromorphic AI chip developer Prophesee has launched a development kit to support its devices for low-power, event-driven image processing.

Prophesee’s patented Metavision sensors and algorithms mimic how the human eye and brain work to dramatically improve efficiency in areas such as autonomous vehicles, industrial automation, IoT, security and surveillance, and AR/VR.

This neuromorphic approach, however, needs dedicated tools so that developers do not have to build their software from scratch.

“The eye is not really sending an image to the brain. There’s no clock, it’s asynchronous and event-based, so the retina reacts to motion, to changes in the scene, and our sensor is doing the same,” said Luca Verre, CEO and co-founder of Prophesee.

“Each pixel is independent and asynchronous, and each pixel monitors the scene and reacts to changes of contrast. Very often this is due to motion in the scene, and it generates an event that carries the X,Y position of the pixel, the time of the change and the size of the change. We are embedding analogue processing that makes it smart.”
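To make the data format concrete, a single event can be pictured as a small record holding the pixel coordinates, the timestamp and the polarity of the contrast change. The sketch below uses a generic NumPy structured array; the field names and types are illustrative assumptions, not Prophesee’s actual event format.

import numpy as np

# Illustrative only: field names and dtypes are assumptions, not
# Prophesee's on-wire event format.
event_dtype = np.dtype([
    ("x", np.uint16),   # column of the pixel that fired
    ("y", np.uint16),   # row of the pixel that fired
    ("t", np.int64),    # timestamp of the change, in microseconds
    ("p", np.int8),     # polarity: +1 brighter, -1 darker
])

# A short burst of events from an edge moving across three pixels.
events = np.array([(120, 45, 1_000_010, 1),
                   (121, 45, 1_000_022, 1),
                   (122, 46, 1_000_035, -1)], dtype=event_dtype)

print(events["t"][-1] - events["t"][0], "us between first and last event")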

The current third-generation VGA sensor has a 640 x 480 resolution on a 15µm pixel pitch and is built in a 180nm process, with the photodiode sharing the die with the analogue and logic circuitry for a fill factor of 30 percent.

The company has signed a deal with Sony to develop the fourth-generation sensor. This will use Sony’s 36nm sensor CMOS process for a 720p HD sensor with around one million pixels. It uses two stacked wafers, one with the pixels on a 4.86µm pitch and the other with the processing, making the die ten times smaller with a fill factor of nearly 100 percent.
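The shrink is consistent with the change in pixel pitch alone: a quick back-of-the-envelope check with the pitches quoted above shows the per-pixel area dropping by close to a factor of ten.

# Rough check using the pixel pitches quoted above.
gen3_pitch_um = 15.0    # third-generation pixel pitch
gen4_pitch_um = 4.86    # fourth-generation (Sony stacked) pixel pitch

area_ratio = (gen3_pitch_um / gen4_pitch_um) ** 2
print(f"Per-pixel area shrink: {area_ratio:.1f}x")   # ~9.5x, close to 10x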

“The software will complement the sensor with the largest software development kit for event-based vision,” said Verre. “This is a big milestone as, for the first time, we are offering the results of five years of work by over 20 engineers on over 100 projects for key customers in automotive, automation and mobile.”

“The kit is for the third-generation sensor if a customer has a commercial project, but there’s nothing preventing them using it for the fourth-generation sensor, and we are doing field tests for customers with that, with volume production at the end of 2021.”

The company combines the output of the sensor with AI neural networks.

“We also use AI for certain applications,” said Verre. “We have implemented our model using a certain type of neural net, a recurrent neural net that is suitable for our type of data. The models in the kit are for detection and tracking of objects such as on a conveyor belt, or a specific shape or scratch on a surface. It can also be used for smart buildings or smart retail, for example where you need to detect and track people to open the door as they approach, so the model we provide is generic,” he said.
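Prophesee has not published the architecture of these models, so the sketch below is only a toy illustration of the general idea: events are accumulated into short time bins and a recurrent layer (a GRU here) carries scene state from one bin to the next, which suits the asynchronous, time-resolved nature of the data. All layer sizes and names are assumptions.

import torch
import torch.nn as nn

class TinyEventTracker(nn.Module):
    """Toy recurrent detector over per-time-bin event-count maps.
    Not Prophesee's model; layer sizes are arbitrary."""
    def __init__(self, height=60, width=80, hidden=128, num_outputs=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(height * width, hidden),   # embed one time bin
            nn.ReLU(),
        )
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_outputs)   # e.g. a bounding box

    def forward(self, bins):                     # bins: (batch, time, H, W)
        b, t, h, w = bins.shape
        feats = self.encoder(bins.reshape(b * t, h * w))
        feats = feats.reshape(b, t, -1)
        out, _ = self.rnn(feats)                 # state carried across bins
        return self.head(out[:, -1])             # prediction after the last bin

# Example: two sequences of 8 time bins of an 80x60 event-count map.
model = TinyEventTracker()
dummy = torch.rand(2, 8, 60, 80)
print(model(dummy).shape)                        # torch.Size([2, 4])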

The sensor itself is more efficient: although it is always on, it only responds to events, so its power consumption depends on the activity in the scene, and producing less data also reduces the power budget of the rest of the system.

“When we compare with state-of-the-art frame-based tracking algorithms we get a benefit at the system level of a 10 to 20x reduction in power consumption, depending on the activity in the scene,” said Verre. A fully static scene has a power consumption under 5mW and high activity 10 to 20mW, compared with 100 to 200mW for frame-based video detection running at 30 frames/s.
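Taking those figures at face value, the saving follows directly from how busy the scene is; the short calculation below uses an assumed 70/30 split between quiet and busy periods purely as an example.

# Power figures from the article; the 70/30 activity split is an assumed example.
static_mw, busy_mw = 5.0, 20.0     # event sensor: static scene vs high activity
frame_based_mw = 150.0             # mid-range of the 100 to 200mW quoted at 30 frames/s

quiet_fraction = 0.7               # assumed: scene is quiet 70 percent of the time
avg_event_mw = quiet_fraction * static_mw + (1 - quiet_fraction) * busy_mw
print(f"Event sensor average: {avg_event_mw:.1f}mW, "
      f"roughly {frame_based_mw / avg_event_mw:.0f}x less than frame-based")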

The Metavision Intelligence Suite has three components – Player, Designer and SDK – that are aimed at different stages of the design process and provide engineers and software developers with a means to easily iterate and customize designs using the chip.

The suite provides 62 algorithms, 54 code samples and 11 ready-to-use applications. It provides users with both C++ and Python APIs as well as extensive documentation and a wide range of samples organized by increasing difficulty to incrementally introduce the fundamental concepts of event-based machine vision.

“We understand the importance of enabling the development ecosystem around event-based vision technology. This software toolkit is meant to accelerate engineers’ ability to take advantage of its unique benefits without having to start from scratch,” said Verre. “The tools offer productivity and learning features that are valuable regardless of where a development team is on the adoption curve of event-based vision and will jumpstart design projects with production-ready design aids.”

Metavision Player provides a graphical user interface that allows engineers to visualize and record data streamed by Prophesee-compatible event-based vision systems, as well as read the provided event datasets.

Metavision Designer is a tool that allows engineers to interconnect components very easily for fast prototyping of event-based vision applications. It consists of a rich set of libraries, Python APIs and code examples built for quick and efficient integration and testing.
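Prophesee’s Designer API itself is not reproduced here; the sketch below only illustrates the pipeline style described, with a source, a filter and a consumer chained together in plain Python. All component names and interfaces are invented for illustration.

import numpy as np

def event_source(n=1_000, width=640, height=480, seed=0):
    """Yield batches of synthetic (x, y, t, p) events."""
    rng = np.random.default_rng(seed)
    t = 0
    for _ in range(n // 100):
        batch = np.stack([rng.integers(0, width, 100),
                          rng.integers(0, height, 100),
                          t + np.sort(rng.integers(0, 1_000, 100)),
                          rng.choice([-1, 1], 100)], axis=1)
        t += 1_000
        yield batch

def roi_filter(batches, x_min=0, x_max=320):
    """Keep only events inside a region of interest."""
    for batch in batches:
        mask = (batch[:, 0] >= x_min) & (batch[:, 0] < x_max)
        yield batch[mask]

def event_counter(batches):
    """Consume the stream and report activity per batch."""
    for batch in batches:
        if len(batch):
            print(f"{len(batch)} events in ROI, last timestamp {batch[-1, 2]}us")

# Chain the components, Designer-style: source -> filter -> consumer.
event_counter(roi_filter(event_source()))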

The Metavision SDK provides access to the algorithms via APIs, ready to go to production with event-based vision applications. The algorithms are coded in C++ and available as pre-compiled Windows and Linux binaries in the free license version.

The Metavision Intelligence suite is available both as a time-unlimited free trial and as a professional version providing access to source code, advanced modules, revision updates, full documentation and support.

The fourth-generation sensor is small enough to use in a mobile phone, said Verre. “In mobile, localisation and mapping for augmented reality (AR) requires low latency understanding of the position of the device in space, particularly for AR that needs precise positioning, and robustness to lighting with a 140dB range so you can handle lighting conditions outdoors,” he said.

www.prophesee.ai
