Prophesee looks to neuromorphic AI in smartphones

Business news |
By Nick Flaherty


Neuromorphic AI pioneer Prophesee has commercialised a reference design for its sensor and software in mobile phones.

The neuromorphic event-driven sensor, which is co-developed with Sony, is being used alongside a traditional camera to provide more data about the image. This can be used to deblur a picture or video and improve the image quality, particularly in low light with longer exposure times.

The Metavision Image Deblur Solution for Smartphones is now a production-ready module that is optimized for the Snapdragon 8 Gen 3 Mobile Platform used by high-end smartphone makers.

The neuromorphic sensor has one million pixels and uses the motion data to optimise the settings of the image sensor for computational photography, Luca Verre, CEO and co-founder of Prophesee, tells eeNews Europe in a demonstration at Mobile World Congress 2024.

“We have made significant progress since we announced this collaboration with Qualcomm in February 2023, achieving the technical milestones that demonstrate the impressive impact on image quality our event-based technology has in mobile devices containing Snapdragon mobile platforms. As a result, our Metavision Deblur solution has now reached production readiness,” said Verre.

Adding the neuromorphic, event-driven sensor to a phone opens up other applications. One of these is the ability to upscale video quality to 60 frames/s at 4K by using the motion data.

“This is the first step with computational photography,” he said. “You can get blur at any time indoors with some motion, from a few pixels to 40 or 50 pixels. We have good results deblurring up to 100 pixels, so we can get the residual blur down to 1 or 2 pixels in any conditions, but particularly in extended exposures in low light.”

Each pixel in the Metavision sensor embeds a logic core, enabling it to act as a neuron.

Each pixel activates itself intelligently and asynchronously depending on the number of photons it senses; a pixel activating itself is called an event. In essence, events are driven by the scene’s dynamics rather than an arbitrary clock, so the acquisition speed always matches the actual scene dynamics.
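That trigger logic can be sketched with the generic contrast-threshold model commonly used to describe event cameras. This is a minimal simulation, not Prophesee's actual pixel circuit, and the threshold value is an arbitrary assumption:

```python
import numpy as np

def generate_events(log_prev, log_curr, threshold=0.2):
    """Generic contrast-threshold event model (a sketch, not
    Prophesee's circuit): a pixel fires an event when its
    log-intensity change since its last event exceeds the
    threshold; polarity is the sign of the change."""
    diff = log_curr - log_prev
    ys, xs = np.nonzero(np.abs(diff) >= threshold)  # pixels that fire
    polarities = np.sign(diff[ys, xs]).astype(int)
    return [(int(x), int(y), int(p)) for x, y, p in zip(xs, ys, polarities)]
```

A static scene produces no events at all, which is why the readout bandwidth and power scale with scene dynamics rather than with frame rate.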

High-performance event-based deblurring is achieved by synchronizing a frame-based and Prophesee’s event-based sensor. The system then fills the gaps between and inside the frames with microsecond events to algorithmically extract pure motion information and repair motion blur.
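A toy version of this idea, loosely following the published event-based double-integral deblurring formulation rather than Prophesee's Metavision implementation (the contrast constant `c` is an assumed parameter), looks like this:

```python
import numpy as np

def deblur_with_events(blurred, cumulative_events, c=0.2):
    """Toy event-based deblurring sketch: model the blurred frame
    as the temporal average of latent sharp frames over the
    exposure. Events give the signed log-intensity change at each
    timestep, so the latent frame at exposure start is the blurred
    frame divided by the mean exponential change factor.

    blurred:            (H, W) blurred frame
    cumulative_events:  (T, H, W) signed event counts accumulated
                        from exposure start up to each timestep
    c:                  assumed per-event log-intensity contrast
    """
    factors = np.exp(c * np.asarray(cumulative_events, dtype=float))
    return blurred * factors.shape[0] / factors.sum(axis=0)
```

When no events fire during the exposure the change factors are all 1 and the frame is returned unchanged, i.e. a static scene needs no repair; the microsecond event timestamps are what let the correction track motion within a single exposure.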

Prophesee also has a fifth-generation quarter-VGA sensor, manufactured by STMicroelectronics, that can be used for eye tracking in smart glasses, or to reduce the bandwidth of video in smart home and IoT applications, as well as to provide a low-power ‘always on’ capability to wake up the system when motion is detected.

“We are pursuing two paths, one to increase the resolution and the other is to decrease the size of the sensor with computational imaging on one side and XR on the other,” said Verre.

www.prophesee.ai/event-based-vision-mobile/
