
How simple it is to fool AI algorithms for autonomous vehicles


Technology News
By Christoph Hammerschmidt



A colour pattern on a T-shirt, a bumper sticker or an emblem on a shopping bag – all of these could become a safety risk for autonomous cars. As a team of researchers at the University of Tübingen (Germany) has shown, the optical flow algorithms of deep neural networks are surprisingly easy to dupe. “It took us three, maybe four hours to create the pattern – it was very quick,” says Anurag Ranjan, PhD student in the Perceiving Systems Department at the Max Planck Institute for Intelligent Systems (MPI-IS) in Tübingen. He is the first author of the publication “Attacking Optical Flow”, a joint research project of the Perceiving Systems Department and the Autonomous Vision Group at the MPI-IS and the University of Tübingen.

The risk to production vehicles currently on the market is low. Nevertheless, as a precaution, the researchers informed several car manufacturers that are currently developing self-driving models, briefing them on the risk so that they can react promptly if necessary.

In their research, Anurag Ranjan and his colleagues tested the robustness of a number of different algorithms for determining the so-called optical flow. Such systems are used in self-driving cars as well as in robotics, medicine, video games and navigation, to name but a few. Optical flow describes the motion in a scene captured by the on-board cameras, estimated as a per-pixel field of motion vectors. Recent advances in machine learning have led to faster and better methods of calculating motion direction and speed. However, the research of the Tübingen scientists makes clear that such methods are susceptible to interference signals – for example, a simple, colourful pattern placed in the scene. Even if the pattern does not actually move at all, the deep neural networks that dominate this application today can be led to incorrect calculations: the network suddenly concludes that parts of the scene are moving in the wrong direction.
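To make the concept concrete, the short sketch below computes a dense optical flow field with OpenCV's classical Farneback estimator. This is purely illustrative: the systems examined in the study are deep networks, not this classical method, and the file names are placeholders for two consecutive camera frames – but the output format is the same, a per-pixel (dx, dy) motion vector.

```python
import cv2
import numpy as np

# Illustration only: dense optical flow with OpenCV's classical Farneback
# estimator. The file names below are placeholders for two consecutive
# camera frames.
prev_frame = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)
curr_frame = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

# Parameters: pyramid scale 0.5, 3 levels, window 15, 3 iterations,
# polynomial neighbourhood 5, sigma 1.2, no extra flags.
flow = cv2.calcOpticalFlowFarneback(prev_frame, curr_frame, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

# flow has shape (H, W, 2): horizontal and vertical displacement per pixel.
magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
print("mean motion magnitude (pixels):", float(np.mean(magnitude)))
```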

The phenomenon is not new in itself. Researchers have shown several times in the past that even tiny patterns can confuse neural networks, causing objects such as stop signs to be misclassified. That alone is a serious safety deficiency. The current research in Tübingen shows for the first time that algorithms for determining the movement of objects are also susceptible to such attacks. Yet systems used in safety-critical applications such as autonomous vehicles must be robust against precisely this kind of manipulation.


Ranjan and his colleagues have been working on the “Attacking Optical Flow” project since March last year. Even the experts were surprised by the result: they found that even a small patch can cause great chaos. A patch covering less than 1% of the total image is sufficient to attack the system. The slight disruption caused the system to make serious errors in its calculations, affecting half of the image area. The larger the patch, the more devastating the impact. “This is worrying because in many cases the flow system has erased the movement of objects throughout the scene,” explains Ranjan, pointing to a video showing the attacked system. It is easy to imagine the damage a paralysed autopilot of a self-driving car could cause at high speed.
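The attack itself boils down to optimising the colours of a small patch so that the network's flow prediction degrades as much as possible. The sketch below illustrates the general idea in PyTorch; it is not the authors' exact procedure, and `flow_net` (a stand-in for any differentiable two-frame flow model), the frame tensors and the hyper-parameters are placeholder assumptions.

```python
import torch
import torch.nn.functional as F

# Minimal sketch of an adversarial patch attack on an optical flow network
# (not the authors' exact optimisation). `flow_net` is a placeholder for a
# differentiable two-frame flow model; frame1/frame2 are image tensors of
# shape (1, 3, H, W) with values in [0, 1].
def attack_patch(flow_net, frame1, frame2, patch_size=25, steps=200, lr=0.01):
    _, _, h, w = frame1.shape
    y, x = (h - patch_size) // 2, (w - patch_size) // 2   # fixed placement for simplicity

    patch = torch.rand(1, 3, patch_size, patch_size, requires_grad=True)
    mask = torch.zeros_like(frame1)
    mask[:, :, y:y + patch_size, x:x + patch_size] = 1.0

    clean_flow = flow_net(frame1, frame2).detach()         # prediction without the patch
    opt = torch.optim.Adam([patch], lr=lr)

    for _ in range(steps):
        # Paste the patch onto both frames; it does not move between them.
        pasted = F.pad(patch.clamp(0, 1),
                       (x, w - x - patch_size, y, h - y - patch_size))
        f1 = frame1 * (1 - mask) + pasted * mask
        f2 = frame2 * (1 - mask) + pasted * mask

        # Maximise the deviation of the predicted flow from the clean prediction.
        loss = -(flow_net(f1, f2) - clean_flow).abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

    return patch.detach()
```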

How individual self-driving systems work is a closely guarded secret of the respective manufacturers, so basic researchers in the field of computer vision can only speculate. “Our work is intended to wake up the manufacturers of self-driving technology and warn them of the potential threat. If they know about it, they can train their systems to be robust against such attacks,” says Michael J. Black, Director of the Perceiving Systems Department at the Max Planck Institute for Intelligent Systems.

Perhaps just as important as the “hacker attack” itself is that it shows automotive development teams how to develop better optical flow algorithms using zero-flow testing. “If we show the system two identical images and there is no movement between them, the colour-coded output of the optical flow algorithm should show no colour at all. However, this is often not the case, even without an attack. This is where the problems begin, and this is where we have to start to fix what the network is doing wrong,” explains Ranjan. He and his team hope that their research will raise awareness, and that car manufacturers will take such attacks seriously and adapt their systems to make them less susceptible to interference.
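A zero-flow check of the kind Ranjan describes is straightforward to automate. The sketch below assumes the same placeholder `flow_net` as above and simply verifies that two identical frames produce (near-)zero predicted motion.

```python
import torch

# Sketch of the "zero flow" sanity check: feeding the same frame twice should
# produce (near-)zero motion everywhere. `flow_net` is the same placeholder
# two-frame flow model as in the previous sketch.
def zero_flow_test(flow_net, frame, tolerance=0.1):
    with torch.no_grad():
        flow = flow_net(frame, frame)            # identical frames, so the true flow is zero
    max_motion = flow.abs().max().item()         # worst-case predicted displacement (pixels)
    print(f"max predicted motion on identical frames: {max_motion:.3f} px")
    return max_motion < tolerance
```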

The publication is available on arXiv and will be presented at the leading international conference in computer vision, the International Conference on Computer Vision (ICCV), currently taking place in Seoul.

More information: https://arxiv.org/abs/1910.10053

YouTube video demonstrating the effect: https://www.youtube.com/watch?v=FV-oH1aIdAI&feature=youtu.be

Related articles:

Deep Neural Networks – only in combination with traditional computer vision

What’s Next for Convolutional Neural Networks?

Continental massively expands commitment to AI

Machine learning for safety-critical functions? Kalray says yes


