“The kit is for the third-generation sensor if a customer has a commercial project, but there’s nothing preventing them from using it for the fourth-generation sensor, and we are doing field tests with customers on that, with volume production at the end of 2021.”
The company combines the output of the sensor with an AI neural network.
“We also use AI for certain applications,” said Verre. “We have implemented our model using a certain type of neural net, a recurrent neural net that is suitable for our type of data. The models in the kit are for detection and tracking of objects, such as on a conveyor belt, or of a specific shape or scratch on a surface. It can also be used for smart building or smart retail, for example where you need to detect and track people to open a door as they approach, so the model we provide is generic,” he said.
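The recurrent approach suits event data because the network carries context between sparse bursts of events rather than reprocessing whole frames. A minimal sketch of that idea is below; the shapes, weights and feature encoding are hypothetical illustrations, not Prophesee's published model.

```python
import numpy as np

# Hypothetical sketch: a tiny recurrent cell consuming event-camera features.
# Each input x summarises recent events (e.g. per-region event counts); the
# hidden state h carries track context between bursts. All names and
# dimensions are assumptions for illustration only.
rng = np.random.default_rng(0)

IN_DIM, HID_DIM = 8, 16
Wx = rng.normal(scale=0.1, size=(HID_DIM, IN_DIM))   # input weights
Wh = rng.normal(scale=0.1, size=(HID_DIM, HID_DIM))  # recurrent weights
Wo = rng.normal(scale=0.1, size=(2, HID_DIM))        # output head: (x, y) estimate

def step(h, x):
    """One recurrent step: returns the new hidden state and a 2-D position."""
    h = np.tanh(Wx @ x + Wh @ h)
    return h, Wo @ h

h = np.zeros(HID_DIM)
for _ in range(5):                       # five bursts of event features
    x = rng.normal(size=IN_DIM)
    h, pos = step(h, x)

print(pos.shape)  # (2,) - an (x, y) track estimate per burst
```

Because state persists across steps, the model only does work when events arrive, which is what makes it a natural fit for an always-on, event-driven sensor.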
The sensor itself is more efficient: although it is always on, it only responds to events, so its power consumption scales with the amount of change in the scene, and producing less data further lowers the power budget.
“When we compare with state-of-the-art frame-based tracking algorithms we get a benefit at the system level of a 10 to 20x reduction in power consumption, depending on the activity in the scene,” said Verre. A fully static scene has a power consumption under 5mW and high activity is 10 to 20mW, compared with 100 to 200mW for frame-based video detection running at 30 frame/s.
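A quick back-of-envelope check of those figures, using only the numbers quoted in the article, shows the extreme cases bracket the 10 to 20x system-level claim:

```python
# Sanity-check of the quoted power figures (all values from the article).
frame_based = (100e-3, 200e-3)   # W: frame-based detection at 30 frame/s
event_based = (5e-3, 20e-3)      # W: event-based, static scene -> high activity

# Busy scene vs the cheapest frame-based baseline (least favourable case)
worst = min(frame_based) / max(event_based)
# Static scene vs the costliest frame-based baseline (most favourable case)
best = max(frame_based) / min(event_based)

print(f"{worst:.0f}x to {best:.0f}x")  # 5x to 40x
```

The 10 to 20x figure quoted by Verre sits inside that 5x to 40x envelope, consistent with typical scenes falling between the fully static and fully busy extremes.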
The Metavision Intelligence Suite has three components – Player, Designer and SDK – that are aimed at different stages of the design process and give engineers and software developers a means to easily iterate and customize designs using the chip.
The suite provides 62 algorithms, 54 code samples and 11 ready-to-use applications. It provides users with