Nano-optic imager is the size of a grain of salt
Researchers at Princeton University and the University of Washington have combined a complex metasurface of 1.6 million elements with computational processing to create a nano-optic imager the size of a grain of salt.
Such a sensor could be used for minimally invasive endoscopy with medical robots to diagnose and treat diseases, and improve imaging for other robots with size and weight constraints. Arrays of thousands of such cameras could be used for full-scene sensing, turning surfaces into cameras.
Instead of using curved glass or plastic lenses to focus the light, the 0.5mm-wide metasurface uses 1.6 million cylindrical posts built in a silicon nitride (SiN) process for an imager that is 500 μm thick. The researchers reported the development in a paper published in Nature Communications this week.
Each post has a unique geometry and functions like an optical antenna. Varying the design of each post is necessary to correctly shape the entire optical wavefront. Using machine learning-based algorithms written in TensorFlow, the posts’ interactions with light combine to produce high-quality, full-colour images with a 40° field of view at an f-number of 2.
A key element is the integrated design of the optical surface and the machine learning signal processing algorithms that produce the image. This boosted the camera’s performance in natural light conditions, in contrast to previous metasurface cameras that required laser light to produce high-quality images, says Felix Heide, the study’s senior author and an assistant professor of computer science at Princeton.
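The core idea of this integrated design is that the optical element and the reconstruction algorithm are optimised jointly against a single reconstruction loss, rather than designing the lens first and the software afterwards. The toy sketch below (not the authors' code; all parameters are illustrative) shows the principle in one dimension: a single "optics" parameter (a blur width) and a single "reconstruction" parameter (a sharpening gain) are tuned together by gradient descent, the same pattern the team applied at scale in TensorFlow to millions of nanoposts.

```python
import numpy as np

# Illustrative end-to-end co-design: jointly optimise an optical
# parameter (Gaussian blur width) and a reconstruction parameter
# (unsharp-mask gain) against a single reconstruction loss.
rng = np.random.default_rng(0)
scene = rng.random(64)                         # synthetic 1-D "scene"

def blur_kernel(sigma, radius=8):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def loss(params):
    sigma, gain = params
    measured = np.convolve(scene, blur_kernel(sigma), mode="same")
    # Crude learned "deconvolution": unsharp masking with a tunable gain.
    smooth = np.convolve(measured, blur_kernel(1.5), mode="same")
    recon = measured + gain * (measured - smooth)
    return np.mean((recon - scene) ** 2)       # reconstruction error

params = np.array([3.0, 0.0])                  # start: wide blur, no sharpening
lr, eps = 0.2, 1e-4
for _ in range(200):
    # Finite-difference gradient (a real pipeline would use autodiff).
    grad = np.array([(loss(params + eps * np.eye(2)[i]) - loss(params)) / eps
                     for i in range(2)])
    params -= lr * grad
    params[0] = np.clip(params[0], 0.5, 10.0)  # keep blur width physical

print("optimised (sigma, gain):", params)
print("final loss:", loss(params))
```

Optimising both parameters together finds a blur/sharpening pair that neither could reach alone, which is the essence of co-designing optics with post-processing.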
Other ultracompact metasurface lenses have suffered from major image distortions, small fields of view, and limited ability to capture the full spectrum of visible light. However, the development of these tiny imaging systems is of great interest for VR and AR glasses. The work was supported in part by the National Science Foundation, the U.S. Department of Defense, the UW Reality Lab, Facebook, Google, Futurewei Technologies and Amazon.
“It’s been a challenge to design and configure these little nano-structures to do what you want,” said Ethan Tseng, a student at Princeton who co-led the study. “For this specific task of capturing large-field-of-view RGB images, it was previously unclear how to co-design the millions of nano-structures together with post-processing algorithms.”
A computational simulator was created to automate testing of different nano-antenna configurations. Because of the number of antennas and the complexity of their interactions with light, this type of simulation can use massive amounts of memory and time, says researcher Shane Colburn. He developed a model to efficiently approximate the metasurfaces’ image production capabilities with sufficient accuracy.
The imager feeds light through a fibre-optic cable to a sensor, and the full computational reconstruction pipeline runs at real-time rates, requiring only 58 ms to process a 720 × 720 pixel RGB image.
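The reported 58 ms per frame translates into usable video rates, as a quick calculation (using only the figures from the article) shows:

```python
# Throughput implied by the reported reconstruction time:
# 58 ms per 720 x 720 RGB frame.
frame_time_s = 0.058
width = height = 720

fps = 1.0 / frame_time_s
megapixels_per_s = width * height * fps / 1e6

print(f"{fps:.1f} frames/s")
print(f"{megapixels_per_s:.1f} MP/s reconstructed")
```

At roughly 17 frames per second, the pipeline is fast enough for live preview, though short of the 30 fps typical of consumer video.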
Colburn now directs system design at Tunoptix, a Seattle-based company cofounded by his graduate adviser, Arka Majumdar, that is commercializing metasurface imaging technologies.
Heide and his colleagues are now working to add more computational abilities to the imager itself. Beyond optimizing image quality, they would like to add capabilities for object detection and other sensing modalities relevant for medicine and robotics. Heide also envisions using ultracompact imagers to create surfaces as sensors.
“We could turn individual surfaces into cameras that have ultra-high resolution, so you wouldn’t need three cameras on the back of your phone anymore, but the whole back of your phone would become one giant camera. We can think of completely different ways to build devices in the future,” he said.
www.nature.com/articles/s41467-021-26443-0; www.princeton.edu
Related articles
- Smart metasurfaces promise to double 6G wireless network capacity
- Metasurface-driven OLED displays promise 10,000+ PPI
- Plasmonic metasurface advances free-space optical imaging
- Researchers shrink hyperspectral imaging with metasurface lens