
Laser imaging technology sees around corners
The Stanford researchers are focused on autonomous vehicle applications, with the goal of a laser-based system sensitive enough to bounce light off nearby surfaces, see around corners and produce images of objects hidden from view. While some autonomous vehicles are already equipped with lasers for detecting objects around the car, the researchers see other opportunities, such as letting aerial vehicles see through foliage or giving rescue teams the ability to find people blocked from view by walls and rubble.
“It sounds like magic,” says Gordon Wetzstein, assistant professor of electrical engineering and senior author of a paper describing the work, “but the idea of non-line-of-sight imaging is actually feasible.”
The researchers are not the first to develop a method for bouncing lasers around corners to capture images of objects. However, they say, their research advances previous efforts by using an extremely efficient and effective algorithm they developed for processing the final image.
“A substantial challenge in non-line-of-sight imaging is figuring out an efficient way to recover the 3-D structure of the hidden object from the noisy measurements,” says David Lindell, graduate student in the Stanford Computational Imaging Lab and co-author of the paper. “I think the big impact of this method is how computationally efficient it is.”
To create their system, the researchers set a laser next to a highly sensitive photon detector, which can record even a single particle of light. They then shot pulses of laser light at a wall; invisible to the human eye, those pulses bounced off objects around the corner and back to the wall, where the detector picked them up.
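As a rough illustration of what the detector records at each scan position, the sketch below simulates a time-resolved photon histogram for a single hidden scattering point in a confocal setup, with the laser and detector aimed at the same wall spot. It is not the researchers' code; the function name transient_histogram, the bin width, the photon budget and the 1/d^4 falloff are assumptions chosen only for illustration.

```python
# Illustrative sketch only (not the researchers' code): simulate the
# time-resolved photon histogram recorded at one confocal scan position,
# assuming a single hidden scattering point. Bin width, photon budget,
# and the 1/d^4 falloff are assumed values.
import numpy as np

C = 3e8               # speed of light, m/s
BIN_WIDTH = 16e-12    # detector time-bin width in seconds (assumed)
N_BINS = 512

def transient_histogram(wall_xy, hidden_point, photons=1000.0, rng=None):
    """Histogram of photon arrival times for one laser spot on the wall."""
    if rng is None:
        rng = np.random.default_rng(0)
    wall = np.array([wall_xy[0], wall_xy[1], 0.0])        # spot on the wall
    d = np.linalg.norm(np.asarray(hidden_point) - wall)   # wall -> hidden object
    round_trip = 2.0 * d / C                               # wall -> object -> wall
    bin_idx = int(round_trip / BIN_WIDTH)

    hist = np.zeros(N_BINS)
    if bin_idx < N_BINS:
        # Two diffuse bounces attenuate roughly as 1/d^4; counts are Poisson.
        hist[bin_idx] = rng.poisson(photons / max(d, 1e-3) ** 4)
    return hist

# Example: a hidden point about 1 m beyond the wall, scanned at the wall's origin.
h = transient_histogram((0.0, 0.0), (0.2, 0.1, 1.0))
print("peak time bin:", int(h.argmax()), "photon counts:", int(h.max()))
```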
Once the scan is finished, which can take anywhere from two minutes to an hour depending on conditions, the algorithm untangles the paths of the captured photons to construct an image. The algorithm currently does this in less than a second and is efficient enough to run on a regular laptop. Given how well it already works, the researchers believe they can speed it up until it is nearly instantaneous once the scan is complete.
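The paper's light-cone transform recasts these confocal measurements so that recovering the hidden volume amounts to inverting a three-dimensional convolution, the kind of operation fast Fourier transforms handle in well under a second on ordinary hardware. The sketch below shows only that generic Wiener-deconvolution step, not the published algorithm; the function wiener_deconvolve_3d, the Gaussian blur kernel and the snr value are placeholders.

```python
# Hedged sketch (not the published implementation): after a light-cone-style
# resampling, recovering the hidden volume reduces to inverting a 3-D
# convolution, which a Wiener filter does with FFTs. The Gaussian kernel
# and snr value below are placeholders.
import numpy as np

def wiener_deconvolve_3d(measurements, kernel, snr=100.0):
    """Invert a 3-D convolution with a Wiener filter (O(N^3 log N) via FFTs)."""
    M = np.fft.fftn(measurements)
    K = np.fft.fftn(kernel, s=measurements.shape)
    filt = np.conj(K) / (np.abs(K) ** 2 + 1.0 / snr)   # regularized inverse filter
    return np.real(np.fft.ifftn(M * filt))

# Toy usage: blur a single bright voxel, then recover its location.
vol = np.zeros((64, 64, 64))
vol[32, 32, 32] = 1.0
z = np.arange(-3, 4)
kernel = np.exp(-(z[:, None, None]**2 + z[None, :, None]**2 + z[None, None, :]**2) / 4.0)
blurred = np.real(np.fft.ifftn(np.fft.fftn(vol) * np.fft.fftn(kernel, s=vol.shape)))
recovered = wiener_deconvolve_3d(blurred, kernel)
print("brightest voxel:", np.unravel_index(recovered.argmax(), recovered.shape))
```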
Going forward, the researchers plan on improving the system to better handle the variability of the real world and complete the scan more quickly. For example, they say, the distance to the object and amount of ambient light can make it difficult for their technology to see the light particles it needs to resolve out-of-sight objects. Their technique also depends on analyzing scattered light particles that are intentionally ignored by the LiDAR guidance systems currently in cars.
“We believe the computational algorithm is already ready for LiDAR systems,” says Matthew O’Toole, a postdoctoral scholar in the Stanford Computational Imaging Lab and co-lead author of the paper. “The key question is whether the current hardware of LiDAR systems supports this type of imaging.”
According to the researchers, the system will also need to work better in daylight and with objects in motion before it could be considered “road ready.” It has been tested successfully outdoors, they say, but only with indirect light. The technology performed particularly well at picking out retroreflective objects, such as safety apparel and traffic signs.
If the technology were placed on a car today, say the researchers, that car could easily detect things like road signs, safety vests or road markers, although it might struggle with a person wearing non-reflective clothing. For more, see “Confocal non-line-of-sight imaging based on the light-cone transform.”
Related articles:
Algorithm lets cameras see behind corners
Stronger THz wave promises safer detection of hidden objects, materials
3D through-wall imaging achieved with drones and Wi-Fi
Velodyne releases ‘breakthrough’ LiDAR sensor
Agile sensor technology may surpass lidar
