Spider-inspired depth sensor fuses optical, imaging breakthroughs

Technology News | By Rich Pell

The inspiration behind the design of the sensor was the efficient depth perception system that has evolved in jumping spiders – a family of spiders with two sets of eyes that can accurately pounce on unsuspecting targets from several body lengths away. The resulting sensor, which combines a multifunctional flat metalens with an ultra-efficient algorithm to measure depth in a single shot, could be used in microrobotics, augmented reality, and wearable devices, say the researchers.

“Evolution has produced a wide variety of optical configurations and vision systems that are tailored to different purposes,” says Zhujun Shi, a Ph.D. candidate in the Graduate School of Arts and Sciences (GSAS) in the Department of Physics and co-first author of a paper on the sensor. “Optical design and nanotechnology are finally allowing us to explore artificial depth sensors and other vision systems that are similarly diverse and effective.”

Current depth sensors use integrated light sources and multiple cameras to measure distance. Humans, by contrast, measure depth using stereo vision – each eye collects a slightly different image of the same scene – and the brain calculates distance from the disparity between the two images. That calculation, say the researchers, is computationally burdensome: human brains can handle it, but the much smaller brains of jumping spiders cannot, so the spiders have instead evolved a more efficient depth perception system.
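To see why stereo depth is computationally burdensome, consider a minimal sketch of the standard approach: for every pixel, search along the scanline of the second image for the best-matching patch (the disparity), then triangulate distance. The function name, patch size, and search range below are illustrative, not taken from the paper:

```python
import numpy as np

def stereo_depth(left, right, focal_px, baseline_m, patch=5, max_disp=16):
    """Brute-force block matching: for each pixel in the left image,
    search the right image for the best-matching patch and convert the
    resulting disparity to depth via triangulation."""
    h, w = left.shape
    r = patch // 2
    depth = np.zeros((h, w))
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            ref = left[y - r:y + r + 1, x - r:x + r + 1]
            # Exhaustive search over candidate disparities -- this
            # per-pixel search is what makes stereo matching costly.
            costs = [np.sum((ref - right[y - r:y + r + 1,
                                         x - d - r:x - d + r + 1]) ** 2)
                     for d in range(1, max_disp + 1)]
            d = 1 + int(np.argmin(costs))
            depth[y, x] = focal_px * baseline_m / d  # triangulation
    return depth
```

Even this toy version does `max_disp` patch comparisons per pixel, which is exactly the workload the spider's single-eye, defocus-based system avoids.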

Jumping spiders have two sets of eyes: two large principal eyes and two small lateral eyes. The lateral eyes sense the motion of an object, such as a fly, which the spider then zeros in on using its principal eyes.

Each principal eye has a few semi-transparent retinae arranged in layers, and these retinae measure multiple images with different amounts of blur. For example, if a jumping spider looks at a fruit fly with one of its principal eyes, the fly will appear sharper in one retina’s image and blurrier in another. This change in blur encodes information about the distance to the fly – a type of distance calculation known as “depth from defocus” in computer vision technology.

Up until now, say the researchers, replicating this natural ability has required large cameras with motorized internal components that capture differently focused images over time, which limits the speed and practical applications of the sensor. Instead, the researchers turned to a metalens – a flat surface patterned with nanostructures that focus light – and designed one that simultaneously produces two images with different defocus.

“Instead of using layered retina to capture multiple simultaneous images as jumping spiders do,” says Shi, “the metalens splits the light and forms two differently-defocused images side-by-side on a photosensor.”

An ultra-efficient algorithm then interprets the two images and builds a depth map to represent object distance.
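The "depth from defocus" idea can be illustrated with a common differential formulation: the difference between two differently defocused images, normalized by the Laplacian of their mean, varies with object distance. This is a sketch of the general technique only, not the paper's exact algorithm; the function names, the calibration constant `alpha`, and the texture threshold `eps` are assumptions for illustration:

```python
import numpy as np

def laplacian(img):
    """Discrete 5-point Laplacian (zero at the one-pixel border)."""
    out = np.zeros_like(img)
    out[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1]
                       + img[1:-1, :-2] + img[1:-1, 2:]
                       - 4 * img[1:-1, 1:-1])
    return out

def depth_from_defocus(i_near, i_far, alpha=1.0, eps=1e-6):
    """Estimate relative depth from two differently defocused images:
    the image difference, divided by the Laplacian of the mean image,
    is monotonically related to object distance."""
    num = i_near - i_far
    den = laplacian(0.5 * (i_near + i_far))
    # Mask out textureless regions where the Laplacian is ~0 and the
    # ratio would be undefined.
    valid = np.abs(den) > eps
    depth = np.zeros_like(i_near)
    depth[valid] = alpha * num[valid] / den[valid]
    return depth, valid
```

Note that this needs no per-pixel search at all – just pixelwise arithmetic on the two images – which is why a defocus-based sensor can build a depth map in a single shot.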

“Being able to design metasurfaces and computational algorithms together is very exciting,” says Qi Guo, a GSAS Ph.D. candidate and co-first author of the paper. “This is a new way of creating computational sensors, and it opens the door to many possibilities.”

Federico Capasso, the Robert L. Wallace Professor of Applied Physics and Vinton Hayes Senior Research Fellow in Electrical Engineering at SEAS and co-senior author of the paper, adds, “Metalenses are a game-changing technology because of their ability to implement existing and new optical functions much more efficiently, faster and with much less bulk and complexity than existing lenses. Fusing breakthroughs in optical design and computational imaging has led us to this new depth camera that will open up a broad range of opportunities in science and technology.”

For more, see “Compact single-shot metalens depth sensors inspired by eyes of jumping spiders.”

Related articles:
‘Time-folded’ optics for ultrafast cameras open new imaging possibilities
Artificial eye can simultaneously control focus, astigmatism, and image shift
Broadband metalens opens new possibilities in virtual, augmented reality
Light-bending metasurfaces open new opportunities in advanced imaging, display
Qualcomm, Himax team on 3D depth sensing solution
