
April 11, 2016 // By Hannes Estl
Sensor fusion: A critical step on the road to autonomous vehicles
Many cars on the road today, and even more new cars in the showrooms, have some form of advanced driver assistance system (ADAS) based on sensors like cameras, radar, ultrasound, or LIDAR. However, it is not just the number or type of sensors that is important, but how you use them.

Finding the golden middle
Depending on the number and type of sensors used in a system, as well as the scalability requirements for different car types and upgrade options, a mix of the two topologies can lead to an optimized solution. Today, many fusion systems use sensors with local processing for radar, LIDAR, and the front camera used for machine vision.

A fully distributed system can use existing sensor modules in combination with an object data fusion ECU. “Dumb” sensor modules for systems like surround view and rear-view cameras make video available to the driver – see figure 5. Many more ADAS functions can be integrated into a fusion system, such as driver monitoring or a camera-monitoring system, but the principle of sensor fusion remains the same.

Figure 5: Finding the perfect mix of distributed and centralized processing.
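
To make the object-level interface of such a fusion ECU more concrete, here is a minimal sketch in C of the kind of data a “smart” sensor module might report upstream. All type and field names are hypothetical, and the units are illustrative only; they are not taken from the article or from any particular automotive stack.

```c
/* Hypothetical object-level interface between smart sensor modules and a
 * central fusion ECU. Names, fields, and units are illustrative only. */
#include <stdint.h>

typedef enum { SRC_RADAR, SRC_LIDAR, SRC_FRONT_CAMERA } sensor_source_t;

typedef struct {
    uint32_t        object_id;     /* tracking ID assigned by the sensor module */
    sensor_source_t source;        /* which module detected the object          */
    float           pos_x_m;       /* longitudinal position, metres             */
    float           pos_y_m;       /* lateral position, metres                  */
    float           vel_x_mps;     /* relative longitudinal velocity, m/s       */
    float           confidence;    /* detection confidence, 0.0 .. 1.0          */
    uint64_t        timestamp_us;  /* capture time, for temporal alignment      */
} fused_object_t;

/* The fusion ECU receives such object lists from each sensor, associates and
 * merges them into one environment model, and only then runs decision logic. */
```

The point of the sketch is that a distributed topology only needs to move a compact, already-interpreted representation of the scene, not the raw sensor data itself.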

Platform management, targeted car segments, flexibility, and scalability are economic factors that also play an important role when partitioning and designing a fusion system. The resulting system might not be the best-case scenario for any given variant, but could be the best choice when looked at from a platform and fleet perspective.

Who is the “viewer” of all this sensor data?
There are two aspects of ADAS that we have not yet discussed: informational ADAS versus functional ADAS. The first extends the senses of the driver while he is still in full control of the car (for example, surround view or night vision). The second is machine vision, which allows the car to perceive its environment and make its own decisions and take its own actions (automated emergency braking, lane-keep assist). Sensor fusion naturally allows those two worlds to converge.

With that comes the possibility of using the same sensor for a different purpose, but at the price of limiting the choices for inter-module communication and the location of processing. Take surround view as an example: it was originally designed to give the driver a 360-degree field of view (FoV) through video feeds to a central display. Why not use the same cameras and apply machine vision to them? The rear camera can be used for back-over protection or automated parking, and the side cameras for blind-spot detection/warning and also automated parking.

Machine vision used alone does local processing in the sensor module and then sends object data or even commands over a simple low-bandwidth connection like CAN. However, such a connection is insufficient for a full video stream. Compressing the video can certainly reduce the required bandwidth, but not enough to get into the single-megabit range, and it comes with its own challenges.
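
To give a rough sense of scale, the back-of-the-envelope sketch below estimates the bus load of an object list sent over classic CAN. The object count, payload size, update rate, and bus speed are assumptions chosen purely for illustration, not measured figures.

```c
/* Back-of-the-envelope bus-load check for object data over classic CAN.
 * All numbers are illustrative assumptions, not measurements. */
#include <stdio.h>

int main(void)
{
    const int    objects_per_cycle = 32;     /* tracked objects per update          */
    const int    bytes_per_object  = 16;     /* position, velocity, ID, confidence  */
    const int    update_rate_hz    = 25;     /* assumed object-list update rate     */
    const double can_bitrate_bps   = 500e3;  /* high-speed classic CAN bus          */

    /* Payload bit rate; real frames add arbitration, CRC and stuffing overhead. */
    double payload_bps = objects_per_cycle * bytes_per_object * 8.0 * update_rate_hz;

    printf("object data : %.0f kbit/s payload\n", payload_bps / 1e3);
    printf("bus load    : ~%.0f%% of a 500 kbit/s CAN bus (before protocol overhead)\n",
           100.0 * payload_bps / can_bitrate_bps);
    return 0;
}
```

Even with generous assumptions, an object list stays in the low hundreds of kilobits per second, which is why a CAN-class link is adequate for object data but hopeless for video.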

With increasing resolutions, frame rates, and numbers of exposures for high dynamic range (HDR), this becomes much more difficult. A high-bandwidth connection and no data processing in the camera module solve the problem for the video, but processing must then be added to the central ECU to run machine vision there. Lack of central processing power or thermal limitations can become the bottleneck of this solution.
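
To put rough numbers on that scaling, the sketch below estimates raw (uncompressed) camera bit rates; the resolutions, frame rates, exposure counts, and 12-bit pixel depth are illustrative assumptions, not figures from the article. The bit rate grows multiplicatively with each factor.

```c
/* Raw video bit-rate estimate: shows how resolution, frame rate and HDR
 * exposure count push camera links far beyond CAN territory.
 * Parameter values are illustrative assumptions. */
#include <stdio.h>

static double raw_bitrate_gbps(int width, int height, int fps,
                               int exposures, int bits_per_pixel)
{
    return (double)width * height * fps * exposures * bits_per_pixel / 1e9;
}

int main(void)
{
    /* A modest ~1-megapixel camera, single exposure */
    printf("1280x800  @ 30 fps, 1 exposure : %.2f Gbit/s\n",
           raw_bitrate_gbps(1280, 800, 30, 1, 12));

    /* A higher-resolution front camera with 3-exposure HDR capture */
    printf("1920x1080 @ 60 fps, 3 exposures: %.2f Gbit/s\n",
           raw_bitrate_gbps(1920, 1080, 60, 3, 12));
    return 0;
}
```

Moving from a modest camera to a high-resolution HDR front camera takes the raw stream from a few hundred megabits per second into the multi-gigabit range, which is what drives the need for dedicated high-bandwidth links and central processing headroom.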

While it is not technically impossible to use both processing in the sensor module and a high-bandwidth connection at the same time, it might not be beneficial from an overall system cost, power, and mounting-space perspective.

