Developing vision systems with dissimilar sensors

October 21, 2019 //By Tom Watzka
Drones, intelligent cars and augmented- or virtual-reality (AR/VR) headsets all use multiple image sensors, often of different types, to capture data about their operating environment. To supply the image data the system needs, each sensor requires a connection to the system’s application processor (AP), which presents design challenges for embedded engineers.

The first challenge is that APs have a finite number of I/O ports available for connecting with sensors, so I/O ports must be carefully allocated to ensure every discrete component requiring a connection to the AP has one. Second, drones and AR/VR headsets have small form factors and run on batteries. Components used in these applications must therefore be as small and power efficient as possible.

One solution to the AP’s shortage of I/O ports is the use of Virtual Channels, as defined in the MIPI Camera Serial Interface-2 (CSI-2) specification. They can consolidate up to 16 different sensor streams into a single stream that can then be sent to the AP over just one I/O port.
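To make the consolidation concrete, the sketch below (in C, written for this article rather than taken from any specific driver) illustrates how a CSI-2 long-packet header carries a virtual-channel ID alongside the payload's data type: the base specification reserves a 2-bit VC field in the Data Identifier byte, and CSI-2 v2.0 adds extension (VCX) bits to reach 16 channels. The packing shown here covers only the base 2-bit case, and ECC generation is omitted for brevity.

/*
 * Minimal sketch of a MIPI CSI-2 (D-PHY) long-packet header, shown only to
 * illustrate how a Virtual Channel ID travels with each packet.
 * The 8-bit Data Identifier holds a 2-bit VC field plus a 6-bit Data Type;
 * CSI-2 v2.0 adds VC extension (VCX) bits to reach 16 channels.
 */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint8_t  data_id;     /* [7:6] virtual channel, [5:0] data type */
    uint16_t word_count;  /* payload length in bytes */
    uint8_t  ecc;         /* error-correction code over the first 3 bytes */
} csi2_packet_header;

static csi2_packet_header make_header(uint8_t vc, uint8_t data_type,
                                      uint16_t word_count)
{
    csi2_packet_header h;
    h.data_id    = (uint8_t)(((vc & 0x3u) << 6) | (data_type & 0x3Fu));
    h.word_count = word_count;
    h.ecc        = 0; /* real transmitters compute a 6-bit Hamming ECC here */
    return h;
}

int main(void)
{
    /* Tag a RAW10 line (CSI-2 data type 0x2B) as virtual channel 2. */
    csi2_packet_header h = make_header(2, 0x2B, 4096);
    printf("DI=0x%02X VC=%u DT=0x%02X WC=%u\n",
           (unsigned)h.data_id, (unsigned)(h.data_id >> 6),
           (unsigned)(h.data_id & 0x3F), (unsigned)h.word_count);
    return 0;
}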

The hardware platform of choice for a Virtual Channel implementation is the field-programmable gate array (FPGA). Alternative hardware platforms take a long time to design and may not have the low-power performance needed for applications like drones or AR/VR headsets. Some would argue that FPGAs have too large a footprint and consume too much power to be a feasible platform for Virtual Channel support. But advances in semiconductor design and manufacturing are enabling a new generation of smaller, more power-efficient FPGAs.


Situational overview

The growing consumer demand for drones, intelligent cars, and AR/VR headsets is driving tremendous growth in the sensor market. Semico Research sees automotive (27% CAGR), drone (27% CAGR), and AR/VR headset (166% CAGR) applications as the primary demand drivers for sensors, and forecasts that semiconductor OEMs will be shipping over 1.5 billion image sensors a year by 2022.

The applications mentioned above require multiple sensors to capture data about the application’s operating environment. For example, an intelligent car could use several high-definition image sensors for the rearview and surround cameras, a LiDAR sensor for object detection, and a radar sensor for blind-spot monitoring – see figure 1.


Figure 1. In today’s intelligent cars, sensors (radar/LiDAR, image, time-of-flight, etc.) enable applications like emergency braking, rearview cameras, and collision avoidance.

This proliferation of sensors presents a problem as all of these sensors need to send data to the car’s AP, and the AP has a finite number of I/O ports available. More sensors also increase the density of wired connections to the AP on the device’s circuit board, which creates design footprint challenges in smaller devices like headsets.

One solution to the AP’s shortage of I/O ports is the use of Virtual Channels. Virtual Channels consolidate video streams from different sensors into a single stream that can be sent to the AP over a single I/O port. A popular current standard for connecting camera sensors to an AP is the MIPI Camera Serial Interface-2 (CSI-2) specification developed by the MIPI Alliance. CSI-2 can combine up to 16 different data streams into one by using the CSI-2 Virtual Channel function. However, combining streams from different image sensors into one video stream presents several challenges.
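One way to picture how the receiving side keeps the combined streams apart is sketched below: each incoming packet’s virtual-channel ID selects a per-sensor handler. The dispatch table and handler names are illustrative assumptions for this article, not part of CSI-2 or any particular AP’s driver API.

/*
 * Hedged sketch of the receive side: the AP (or an FPGA bridge) reads the
 * virtual-channel ID from each packet's Data Identifier byte and routes the
 * payload to the handler registered for that sensor. Handlers and the
 * dispatch table are illustrative, not taken from any real driver.
 */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

#define MAX_VIRTUAL_CHANNELS 16   /* upper bound defined by CSI-2 v2.0 */

typedef void (*stream_handler)(const uint8_t *payload, size_t len);

static void handle_image(const uint8_t *p, size_t n) { (void)p; printf("image: %zu bytes\n", n); }
static void handle_lidar(const uint8_t *p, size_t n) { (void)p; printf("LiDAR: %zu bytes\n", n); }

static stream_handler handlers[MAX_VIRTUAL_CHANNELS] = {
    [0] = handle_image,   /* VC 0: HD image sensor */
    [1] = handle_lidar,   /* VC 1: LiDAR sensor    */
};

/* Route one received long packet to the handler registered for its VC. */
static void dispatch_packet(uint8_t data_id, const uint8_t *payload, size_t len)
{
    uint8_t vc = data_id >> 6;   /* 2-bit VC; VCX bits extend this in v2.0 */
    if (handlers[vc])
        handlers[vc](payload, len);
}

int main(void)
{
    uint8_t line[16] = {0};
    dispatch_packet((0u << 6) | 0x2B, line, sizeof line);  /* VC 0, RAW10 */
    dispatch_packet((1u << 6) | 0x2B, line, sizeof line);  /* VC 1, RAW10 */
    return 0;
}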

