Image sensors to get AI in three-layer die stack

Technology News |
By Peter Clarke

This approach to image sensor construction would necessitate the use of through silicon vias (TSVs) to link at least two of the layers.

This next step was presented in an otherwise largely historical description of ST's progress from CMOS image sensors to 3D environment sensing, provided in the imaging track at the SEMI-organized MEMS & Imaging Sensors Summit held in Grenoble last week.

The presentation was given by Helene Wehbe-Alause, a senior engineer who had worked on a number of the developments and who was standing in for Laurent Malier, general manager of technology at ST.

Wehbe-Alause described three landmark developments in ST’s progress. The first was the development and adoption of capacitive deep-trench isolation for pixels back in 2008. The second was the use of two-layer bonding, which allowed a top wafer to be optimized for photonics and a bottom wafer to be optimized for movement of data and low-power processing. The third was the move from a rolling shutter to a global shutter. Rolling shutters allow higher frame speeds but produce curved artefacts when objects are moving in the field of view.
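The rolling-shutter artefact mentioned above can be quantified: because rows are read out one after another, an object moving across the field of view shifts a little between each row, producing the familiar skew. A minimal illustrative sketch (the function name and example numbers are assumptions, not ST figures):

```python
def rolling_shutter_skew_px(object_speed_px_per_s: float,
                            line_readout_time_s: float,
                            num_rows: int) -> float:
    """Horizontal displacement accumulated between the first and last
    row of a rolling-shutter readout. A global shutter exposes all rows
    simultaneously, so this term vanishes."""
    return object_speed_px_per_s * line_readout_time_s * num_rows

# e.g. an object crossing at 1000 px/s, 10 microseconds per line,
# 1080 rows: the last row sees it about 10.8 px away from the first.
print(rolling_shutter_skew_px(1000.0, 10e-6, 1080))
```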

Wehbe-Alause characterized ST’s progress with reference to one of ST’s most successful image sensor products: the FlightSense line of SPAD-based time-of-flight proximity and ranging sensors. These have come to replace simple IR proximity sensors in many applications, as the latter could be affected by the colour of the target.
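The underlying principle of such direct time-of-flight ranging is simply the photon round-trip time, d = c·t/2. A minimal sketch of that calculation (names are illustrative, not the FlightSense API):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance_m(round_trip_time_s: float) -> float:
    """Target distance from the measured photon round-trip time:
    d = c * t / 2 (the pulse travels out and back)."""
    return C * round_trip_time_s / 2.0

# A target at roughly 1 m returns the pulse after about 6.67 ns,
# which hints at the picosecond-scale timing SPAD arrays must resolve.
print(tof_distance_m(6.671e-9))  # ~1.0 m
```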

Research started in 2006 and resulted in first products in 2013. FlightSense solutions, based on the hybrid bonding approach, have now been designed into more than 132 phone models, Wehbe-Alause said.

Next step

The next step in the progression would be to include a third layer in the hybrid stack. The top layer would be the image sensor, which in the case of a visible-light sensor could potentially be a back-side illuminated (BSI) device bonded face-to-face to an active device responsible for analog-to-digital conversion and data movement. The third layer would be an STM32 microcontroller able to run digital neural network software and create a highly autonomous sensor that can analyse scenes and communicate with other parts of the system only in accordance with application-specific criteria.

This has the advantage of reducing computing load at the host and reducing power consumption.
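The host-offload idea can be sketched in a few lines: the stacked sensor runs inference locally and only raises an event when an application-specific criterion is met, so the host stays idle for uninteresting frames. Everything here (types, names, the toy detector) is an illustrative assumption, not ST code:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class SceneEvent:
    """Result of on-sensor neural-network inference for one frame."""
    label: str
    confidence: float

def autonomous_sensor_step(classify: Callable[[bytes], SceneEvent],
                           frame: bytes,
                           criterion: Callable[[SceneEvent], bool]
                           ) -> Optional[SceneEvent]:
    """Run inference on-sensor; report to the host only when the
    application criterion is met, otherwise return nothing, saving
    host compute and interface power."""
    event = classify(frame)
    return event if criterion(event) else None

# Example: only wake the host for 'person' detections above 90%.
detector = lambda frame: SceneEvent("person", 0.95)
wanted = lambda e: e.label == "person" and e.confidence > 0.9
print(autonomous_sensor_step(detector, b"frame-bytes", wanted))
```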

When asked whether it would be appropriate to use an AI-specific processor in the third layer of the smart sensor, Wehbe-Alause said that was possible, adding that the solution she outlined was an “exploration”.

The use of an STM32 microcontroller brings the advantage of an already available developers’ interface for digital neural networks.

Related links and articles:

News articles:

STM32 neural-network developer toolbox

ST offers infrared time-of-flight sensor

IR time-of-flight proximity sensor opens up new smartphone user interactions


