
Camera data compression with low latency

Technology News | By Christoph Hammerschmidt



Today, up to 12 cameras are installed in new vehicle models, mostly in the front and rear lights and in the side mirrors. An on-board computer uses their data for ADAS functions such as lane keeping assistance, parking aid, or the detection of other road users and obstacles. Benno Stabernack from the Fraunhofer Institute for Telecommunications, Heinrich Hertz Institute (HHI) in Berlin predicts that if autonomous driving is introduced as quickly as currently forecast, the number of cameras will continue to rise.

This translates into an even higher load on the vehicles’ internal data networks. Currently, these networks can carry a data volume of about one gigabit per second, and a single camera delivering HD images already reaches this limit. “The remedy here is compression,” says Stabernack. Fraunhofer HHI has previously contributed to the development of the video coding standards H.264/Advanced Video Coding (AVC) and H.265/MPEG High Efficiency Video Coding (HEVC). “With these methods, the amount of data can be greatly reduced. This means that more than ten times the amount of data can be transferred,” Stabernack emphasizes.
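
A rough back-of-the-envelope check makes these numbers concrete. The sketch below assumes a 1080p camera with 8-bit YUV 4:2:0 output at 30 frames per second on a 1 Gbit/s in-vehicle link; these figures are illustrative assumptions, not values given by Fraunhofer HHI.

    # Rough bandwidth estimate (assumed values: 1080p, 8-bit YUV 4:2:0, 30 fps)
    width, height, fps = 1920, 1080, 30
    bits_per_pixel = 12                  # YUV 4:2:0 averages 12 bits per pixel
    raw_bps = width * height * bits_per_pixel * fps
    link_bps = 1e9                       # ~1 Gbit/s in-vehicle network

    print(f"uncompressed stream: {raw_bps / 1e6:.0f} Mbit/s")                      # ~746 Mbit/s
    print(f"cameras per link, uncompressed: {link_bps / raw_bps:.1f}")             # ~1.3
    print(f"cameras per link at 10:1 compression: {link_bps / raw_bps * 10:.0f}")  # ~13

At 60 frames per second or with higher bit depth, a single uncompressed camera already exceeds the link capacity, which is why compression on the order of ten to one or more is needed.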


Typically, 30 to 60 frames per second are sent from a camera to the vehicle’s central computing unit. Compressing the image data introduces a latency of typically five to six frames, Stabernack explains. The reason is that the methods compare an image with those already transmitted in order to calculate the differences between the current image and its predecessors. Only the changes from image to image are then sent over the network, and this calculation takes a certain amount of time.
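
Converted into milliseconds, simply applying the frame rates and the five-to-six-frame delay quoted above:

    # Convert the quoted 5-6 frame delay into milliseconds
    for fps in (30, 60):
        frame_ms = 1000 / fps
        print(f"{fps} fps: 5-6 frames of delay = {5 * frame_ms:.0f}-{6 * frame_ms:.0f} ms")
    # 30 fps: 167-200 ms
    # 60 fps:  83-100 ms

At 100 km/h a vehicle covers roughly 28 metres per second, so a delay of 200 ms corresponds to more than five metres of travel before the on-board computer even sees the frame.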

In road traffic, however, this delay can be critical. To avoid the latency, Stabernack and his team use only specific mechanisms of the H.264 coding process: the differences are no longer calculated between successive images, but within a single image. This reduces the latency to the equivalent of less than one frame, allowing the image data to be transmitted and processed almost in real time. “This means that we can now also use the H.264 process for cameras in vehicles,” says Stabernack. The technology has been implemented at chip level: the device compresses the image data in the camera and decodes it in the on-board computer.
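
The principle can be sketched in a few lines. The toy functions below are purely illustrative: they contrast predicting samples from the previous frame with predicting them from neighbours within the same frame, and are neither the Fraunhofer implementation nor real H.264 syntax.

    # Toy comparison of inter-frame vs. intra-frame prediction (illustrative only)

    def inter_residual(frame, prev_frame):
        # codes each sample against the same sample in the PREVIOUS frame,
        # so earlier frames must be buffered before the data can be sent
        return [cur - ref for cur, ref in zip(frame, prev_frame)]

    def intra_residual(frame):
        # codes each sample against its left neighbour in the SAME frame,
        # so nothing outside the current frame is needed
        return [frame[0]] + [frame[i] - frame[i - 1] for i in range(1, len(frame))]

    previous = [10, 12, 13, 13, 15]
    current  = [11, 12, 14, 14, 16]
    print("inter residual:", inter_residual(current, previous))  # [1, 0, 1, 1, 1]
    print("intra residual:", intra_residual(current))            # [11, 1, 2, 0, 2]

Intra-only coding generally compresses less efficiently than inter-frame coding, but it removes the dependency on previously transmitted frames and with it most of the buffering delay.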

The Berlin researchers have patented their process and license their know-how to industry. Their customers are automotive suppliers, and the first vehicle models with Fraunhofer technology are already on the market. In the next step, the researchers want to transfer their method to the HEVC standard and feed their experience into future standardization efforts.

The HHI’s compression technology can be seen at Embedded World from 27 February to 1 March 2018 in Nuremberg (Hall 4, Stand 4-470).

Related articles:

4K camera-on-chip for mirrors replacement applications

Video codec hardware IP doubles performance

Integrated SoCs target mid-range automotive infotainment systems

Video software stack receives GENIVI compliance certification


