Computer vision system helps robots, vehicles see around corners

Technology News |
By Rich Pell

The system works by sensing tiny changes in shadows on the ground to determine if there’s a moving object around a corner. Such a system could be used by autonomous vehicles to avoid a potential collision with another car or pedestrian emerging from around a building’s corner or from in between parked cars.

In the future, say the engineers, robots might also use the system when navigating hallways in buildings to avoid hitting people.

“For applications where robots are moving around environments with other moving objects or people,” says Daniela Rus, director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT, “our method can give the robot an early warning that somebody is coming around the corner, so [it] can slow down, adapt its path, and prepare in advance to avoid a collision. The big dream is to provide ‘X-ray vision’ of sorts to vehicles moving fast on the streets.”

In a paper on their work, the researchers describe successful experiments with an autonomous car driving around a parking garage and an autonomous wheelchair navigating hallways. When sensing and stopping for an approaching vehicle, say the researchers, the car-based system beats traditional LiDAR, which can detect only objects in its direct line of sight, by more than half a second.

The system is based on an earlier project, called “ShadowCam,” that uses computer-vision techniques to detect and classify changes to shadows on the ground. It uses sequences of video frames from a camera targeting a specific area, such as the floor in front of a corner, to detect changes in light intensity over time, from image to image, that may indicate something moving away or coming closer.

Some of those changes, say the researchers, may be difficult to detect, or even invisible, to the naked eye, and depend on various properties of the object and environment. ShadowCam computes that information and classifies each image as containing a stationary object or a dynamic, moving one; if it classifies an image as dynamic, the system reacts accordingly.
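The classification step described above can be sketched in a few lines. This is a minimal illustration, not the ShadowCam implementation: frames are modeled as small grids of grayscale intensities, and the function names, region-of-interest format, and threshold value are all assumptions for the example.

```python
# Minimal sketch of frame-to-frame shadow-change classification.
# A frame is a 2D grid of grayscale intensities (0-255); we compare the
# mean absolute intensity change over a region of interest (ROI) against
# a threshold to label the scene "static" or "dynamic".

def mean_abs_change(prev_frame, curr_frame, roi):
    """Average |intensity difference| over the ROI (a list of (row, col))."""
    total = sum(abs(curr_frame[r][c] - prev_frame[r][c]) for r, c in roi)
    return total / len(roi)

def classify_frame(prev_frame, curr_frame, roi, threshold=2.0):
    """Return 'dynamic' if the shadow region changed enough, else 'static'."""
    change = mean_abs_change(prev_frame, curr_frame, roi)
    return "dynamic" if change > threshold else "static"

# A 3x3 patch of floor; a faint shadow darkens one corner by a few counts.
roi = [(r, c) for r in range(3) for c in range(3)]
still = [[100] * 3 for _ in range(3)]
shadowed = [[100, 100, 100], [100, 97, 95], [100, 95, 92]]

print(classify_frame(still, still, roi))      # unchanged scene
print(classify_frame(still, shadowed, roi))   # creeping shadow
```

The key design point is that the decision is made on aggregate intensity change in a small ground patch, so even sub-visible shadow motion can cross the threshold.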

Adapting the system for autonomous vehicles required a few advances. For example, the original system relied on lining an area with augmented-reality labels resembling simplified QR codes, which robots scan to compute their precise 3D position and orientation relative to each tag. ShadowCam used the tags as features of the environment to zero in on specific patches of pixels that may contain shadows, but modifying real-world environments with such tags is not practical.

Instead, the researchers developed a process that combines image registration with a new visual-odometry technique. Image registration essentially overlays multiple images to reveal variations between them; medical image registration, for instance, overlaps medical scans to compare and analyze anatomical differences.

Used on the Mars rovers, visual odometry estimates the motion of a camera in real time by analyzing pose and geometry in sequences of images. The researchers specifically use a method of visual odometry, called Direct Sparse Odometry (DSO), that can compute feature points in environments similar to those captured by the original system's AR tags. Essentially, DSO plots features of an environment on a 3D point cloud, and a computer-vision pipeline then selects only the features located in a region of interest, such as the floor near a corner.
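The region-of-interest selection can be illustrated as a simple geometric filter over a point cloud. The point format, the floor-plane test, and the bounds below are invented for the example; a real DSO pipeline produces and filters points quite differently.

```python
# Illustrative sketch: filtering a DSO-style 3D point cloud down to a
# region of interest, here a floor patch near a corner. Points are simple
# (x, y, z) tuples; the floor is assumed to lie near z = 0.

def in_floor_roi(point, x_range, y_range, floor_z=0.0, tol=0.05):
    """Keep points lying near the floor plane inside an x/y bounding box."""
    x, y, z = point
    return (x_range[0] <= x <= x_range[1]
            and y_range[0] <= y <= y_range[1]
            and abs(z - floor_z) <= tol)

def select_roi(cloud, x_range, y_range):
    """Return only the points inside the floor region of interest."""
    return [p for p in cloud if in_floor_roi(p, x_range, y_range)]

cloud = [
    (1.0, 2.0, 0.01),   # floor point inside the corner patch
    (1.2, 2.5, 0.02),   # floor point inside the patch
    (1.1, 2.1, 1.40),   # wall feature, too high above the floor
    (5.0, 0.5, 0.00),   # floor point outside the patch
]
roi_points = select_roi(cloud, x_range=(0.5, 2.0), y_range=(1.5, 3.0))
print(len(roi_points))  # the two floor points near the corner survive
```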

As ShadowCam takes input image sequences of a region of interest, it uses the DSO-image-registration method to overlay all the images from the same viewpoint of the robot. Even as a robot is moving, it’s able to zero in on the exact same patch of pixels where a shadow is located to help it detect any subtle deviations between images.
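The overlay step above can be sketched with a toy alignment. Here the camera motion is faked as a known one-pixel horizontal shift per frame; real registration estimates that motion from the DSO pose. The shift model, patch size, and averaging are all simplifications for illustration.

```python
# Sketch of the overlay step: once frames are registered to a common
# viewpoint (here modeled as undoing a known integer pixel shift),
# averaging the aligned patch lines the same pixels up across frames so
# small shadow deviations stand out against the stable background.

def align(frame, shift):
    """Undo a known horizontal shift by rotating each row left by `shift`."""
    return [row[shift:] + row[:shift] for row in frame]

def overlay(frames, shifts):
    """Pixel-wise mean of the aligned frames."""
    aligned = [align(f, s) for f, s in zip(frames, shifts)]
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[sum(a[r][c] for a in aligned) / len(aligned) for c in range(cols)]
            for r in range(rows)]

# The same dark floor feature (60) seen from two camera positions:
base = [[100, 60, 100, 100]]
moved = [[100, 100, 60, 100]]   # camera moved one pixel between frames
stacked = overlay([base, moved], shifts=[0, 1])
print(stacked[0])  # the feature lines up again after alignment
```

After alignment the dark feature falls on the same pixel in every frame, which is what lets the system compare "the exact same patch of pixels" across time even while the robot moves.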

Pixels that may contain shadows get a boost in color that makes extremely weak signals from shadow changes far more detectable. If the boosted signal reaches a certain threshold — based partly on how much it deviates from other nearby shadows — ShadowCam classifies the image as “dynamic,” and depending on the strength of that signal, the system may tell the robot to slow down or stop.
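The amplify-and-threshold logic can be sketched as follows. The gain factor here stands in for the paper's color boost, and the gain value, thresholds, and slow-down/stop bands are invented for illustration, not taken from ShadowCam.

```python
# Hedged sketch of the amplify-and-threshold step: per-pixel deviations
# from the patch mean are scaled up, summed into one signal, and compared
# against thresholds that map signal strength to a driving action.

def shadow_signal(patch, gain=10.0):
    """Amplified mean absolute deviation of a 1D intensity patch."""
    mean = sum(patch) / len(patch)
    return gain * sum(abs(p - mean) for p in patch) / len(patch)

def decide(patch, dynamic_at=20.0, stop_at=60.0):
    """Classify the patch and pick an action based on signal strength."""
    signal = shadow_signal(patch)
    if signal < dynamic_at:
        return "static", "continue"
    return ("dynamic", "stop") if signal >= stop_at else ("dynamic", "slow down")

print(decide([100, 100, 100, 100]))   # uniform floor
print(decide([100, 92, 100, 96]))     # faint shadow edge
print(decide([100, 80, 75, 100]))     # strong moving shadow
```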

“By detecting that signal, you can then be careful,” says Felix Naser, a former CSAIL researcher and first author of a paper on the research. “It may be a shadow of some person running from behind the corner or a parked car, so the autonomous car can slow down or stop completely.”

So far the system has been tested only in indoor settings, where robot speeds are lower and lighting conditions are more consistent, making it easier to sense and analyze shadows. In one test, an autonomous wheelchair steered toward various hallway corners while humans turned the corner into its path, and the system's accuracy in classifying moving versus stationary objects was evaluated with both the original AR-tag method and the new DSO-based method. Both methods achieved the same 70% classification accuracy, say the researchers, indicating that the original AR tags are no longer needed.

In a separate test, the researchers implemented ShadowCam in an autonomous car in a parking garage with its headlights turned off, mimicking nighttime driving conditions, and compared car-detection times against LiDAR. In an example scenario, the system detected a car turning around pillars about 0.72 seconds faster than LiDAR. Moreover, say the researchers, because they had tuned ShadowCam specifically to the garage's lighting conditions, the system achieved a classification accuracy of around 86%.

Looking ahead, the researchers say they are developing the system further to work in different indoor and outdoor lighting conditions. In the future, they say, there could also be ways to speed up the system’s shadow detection and automate the process of annotating targeted areas for shadow sensing.

