
Autonomous cars learn to drive with foresight
An empty street, a row of parked cars at the edge: nothing that calls for caution. But wait: doesn’t a side street open up ahead, half hidden by the parked cars? Maybe I’d better take my foot off the gas – who knows whether someone will come from the side. When driving, we constantly encounter situations like this that demand special care. Interpreting them correctly and drawing the right conclusions requires a lot of experience. Self-driving cars, in contrast, sometimes behave like a student driver in their first lesson. Scientists now want to teach them a more anticipatory driving style.
Computer scientist Prof. Dr. Jürgen Gall heads the Computer Vision group at the University of Bonn, which, in cooperation with colleagues from the Institute of Photogrammetry and the Autonomous Intelligent Systems group, is researching a solution to this problem. At the International Conference on Computer Vision in Seoul (November 1), the scientists will present a first step towards this goal. “We have further developed an algorithm that completes and interprets lidar data,” he explains. “This enables the car to adapt to possible dangers at an early stage.”
Lidar is a rotating laser scanner mounted on the roof of most autonomous cars. With each revolution, the system records the distance to around 120,000 points around the vehicle. Thanks to the high quality of the data it delivers, lidar is considered the “gold standard” among surround-sensing technologies for autonomous vehicles.
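To picture the raw data: a single scan is simply a long list of 3D points. Below is a minimal Python sketch, assuming the widely used KITTI binary layout (four float32 values per point: x, y, z, reflectance); the file path is a hypothetical placeholder.

import numpy as np

def load_scan(path: str) -> np.ndarray:
    """Return an (N, 4) array of points from one lidar revolution."""
    points = np.fromfile(path, dtype=np.float32)  # flat float32 stream
    return points.reshape(-1, 4)                  # rows: x, y, z, reflectance

scan = load_scan("velodyne/000000.bin")  # hypothetical file name
print(scan.shape)                        # roughly (120000, 4) per revolution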
The problem is that the spacing between the measuring points grows with distance from the sensor. Even for a human, it is therefore hardly possible to form a correct picture of the surroundings from a single lidar scan, i.e. the distance measurements of one revolution. “A few years ago, the Karlsruhe Institute of Technology (KIT) recorded large quantities of lidar data, a total of 43,000 scans,” explains Dr. Jens Behley of the Institute of Photogrammetry. “We have now taken sequences of several dozen scans each and superimposed them.” The data obtained this way also contains points that the sensor only recorded once the car had driven a few dozen meters further. Put simply, the superimposed scans show not only the present but also the future.
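The article does not spell out how the superimposition is computed; a plausible sketch, assuming a known 4x4 ego-pose matrix for each scan (e.g. from the vehicle’s odometry), is to transform every scan into a common world frame and stack the points:

import numpy as np

def aggregate(scans: list[np.ndarray], poses: list[np.ndarray]) -> np.ndarray:
    """Transform each (N, 3) scan by its 4x4 pose and stack the results."""
    clouds = []
    for points, pose in zip(scans, poses):
        # Append a 1 to each point so the 4x4 rigid transform applies directly.
        homogeneous = np.hstack([points, np.ones((len(points), 1))])
        clouds.append((homogeneous @ pose.T)[:, :3])
    return np.vstack(clouds)  # one dense, superimposed point cloud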
“These superimposed point clouds contain important information, such as the geometry of the scene and the spatial extent of the objects in it, that is not available in a single scan,” stresses Martin Garbade, who is doing his doctorate at the University of Bonn’s Institute of Computer Science. “In addition, we labeled every single point in them – for example: here is a sidewalk, there is a pedestrian, and back there is a motorcyclist.” The scientists then fed their software with data pairs: a single lidar scan as input and the corresponding superimposed point cloud, including the semantic information, as the desired output. They repeated this for several thousand such pairs.
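The article does not say how the labeled target is represented internally. One common representation for scene completion, sketched here purely as an assumption (including the grid size and 0.2 m resolution), is a dense semantic voxel grid built from the labeled points:

import numpy as np

def voxelize(points: np.ndarray, labels: np.ndarray,
             grid=(256, 256, 32), voxel_size=0.2) -> np.ndarray:
    """Write each point's class label into its voxel; 0 means empty/unknown.

    Assumes points have already been shifted so all coordinates fall
    inside the grid's range, and that labels are small integer class IDs.
    """
    grid_labels = np.zeros(grid, dtype=np.uint8)
    indices = np.floor(points / voxel_size).astype(int)
    inside = np.all((indices >= 0) & (indices < np.array(grid)), axis=1)
    grid_labels[tuple(indices[inside].T)] = labels[inside]
    return grid_labels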
“During this training phase, the algorithm learned to complete and interpret individual scans,” explains Gall. It was then able to plausibly fill in missing measurements and interpret what the scans showed. The scene completion already works relatively well: the procedure can correctly reconstruct about half of the missing data. The semantic interpretation – i.e. inferring which objects are hidden behind the measuring points – does not yet work as well: here, the computer achieves a maximum hit rate of 18 percent.
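The two figures can be read as two different scores. A rough sketch of how such scores might be computed on voxel grids (the exact evaluation protocol is an assumption here): an occupancy intersection-over-union for the completion, and a per-class intersection-over-union for the semantic hit rate.

import numpy as np

def completion_iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """IoU of occupancy: was the missing geometry filled in correctly?"""
    p, t = pred > 0, truth > 0
    return (p & t).sum() / max((p | t).sum(), 1)

def semantic_iou(pred: np.ndarray, truth: np.ndarray, num_classes: int) -> float:
    """Mean IoU over classes: were the right objects recognized?"""
    ious = []
    for c in range(1, num_classes):  # class 0 is empty/unknown
        p, t = pred == c, truth == c
        union = (p | t).sum()
        if union:
            ious.append((p & t).sum() / union)
    return float(np.mean(ious)) if ious else 0.0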
However, the scientists see this branch of research as still being in its infancy. “Until now, there has simply been a lack of extensive data sets with which corresponding artificial-intelligence methods can be trained,” emphasizes Gall. Their current work closes this gap. The scientists are optimistic that they can significantly increase the hit rate of the semantic interpretation over the next few years; Gall considers 50 percent to be quite realistic. Autonomous driving could thus gain considerably in quality.
