
Artificial intelligence that uses light instead of electricity

News |
By Wisse Hettinga



‘Suddenly their lens was not just benefiting from artificial intelligence; it was performing functions a lot like what happens in the most sophisticated AI systems that recognize images’

From the Princeton Engineering report:

At first, Heide, an assistant professor of computer science at Princeton University, who joined the faculty in 2020, focused on using machine learning to more effectively draw information from light captured by cameras. He succeeded in creating cameras that use visible light or radar signals to detect objects around blind corners and see through fog — important goals for aiding drivers or ensuring the safety of autonomous vehicles. 

Soon, however, Heide realized that making more progress would require rethinking what a camera is and what a lens is.

With his first graduate student, Ethan Tseng, Heide began to explore metasurfaces: engineered materials whose nanoscale geometry gives them special optical properties. Instead of bending light inside a piece of glass or plastic, as a conventional lens does, a metasurface diffracts light around tiny structures, the way light spreads out as it passes through a slit. To build metasurfaces, Heide and his team struck up a collaboration with the lab of Arka Majumdar at the University of Washington, which specializes in ultra-small devices that control the interaction of light and matter.
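For readers who want a feel for the diffraction behavior described above, here is a minimal sketch in Python: it computes the far-field (Fraunhofer) pattern of light passing through a single slit with an FFT. The slit width, sampling, and simple scalar model are illustrative assumptions, not the team's setup.

```python
import numpy as np

N = 4096                                 # samples across the aperture plane
x = np.linspace(-1e-3, 1e-3, N)          # aperture coordinates, in meters
slit_width = 50e-6                       # 50-micron slit (illustrative value)

aperture = (np.abs(x) < slit_width / 2).astype(float)   # 1 inside the slit

# In the Fraunhofer regime the far-field amplitude is, up to scaling,
# the Fourier transform of the aperture function.
far_field = np.fft.fftshift(np.fft.fft(aperture))
intensity = np.abs(far_field) ** 2
intensity /= intensity.max()

# The pattern is the familiar sinc^2: a central peak flanked by weak lobes.
first_null = int(round(N / aperture.sum()))        # first null, in FFT bins
side_lobe = intensity[N // 2 + first_null:].max()
print(f"central peak: {intensity[N // 2]:.3f}, strongest side lobe: {side_lobe:.3f}")
```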

“This is a completely new way of thinking about optics, which is very different from traditional optics,” Majumdar said in a news story published by the University of Washington.

In their first breakthrough, the combined research team built a high-resolution camera smaller than a large grain of salt. First reported in 2021 and now widely cited in scientific journals and news media, the tiny camera depended on the exact positioning of millions of tiny pillars on a metasurface. The team used machine learning to arrange the pillars, each about a thousandth of a millimeter tall and far narrower in width, to make maximum use of the light hitting the device.
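As a loose illustration of what "using machine learning to arrange the pillars" can mean in practice, the toy inverse-design loop below runs gradient ascent on the phase each pillar imparts, so that a simple 1D scalar model concentrates light on one chosen far-field spot. Everything here, from the model to the parameters and update rule, is an assumption made for illustration, not the team's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256                   # number of pillars in this toy 1D metasurface
target = 64               # far-field bin where we want light concentrated
phases = rng.uniform(0, 2 * np.pi, N)        # random initial pillar phases

# One row of the DFT matrix: maps pillar fields to the target far-field bin.
dft_row = np.exp(-2j * np.pi * target * np.arange(N) / N)

for _ in range(100):
    c = np.exp(1j * phases) * dft_row        # per-pillar contribution
    field = c.sum()                          # far-field amplitude at target
    # Analytic gradient of |field|^2 with respect to each phase:
    # d|E|^2 / dphi_j = 2 Re(conj(E) * i * c_j) = -2 Im(conj(E) * c_j)
    grad = -2.0 * np.imag(np.conj(field) * c)
    phases += 0.5 / (np.abs(field) + 1e-12) * grad   # normalized ascent step

final = np.abs((np.exp(1j * phases) * dft_row).sum()) ** 2
print("target intensity / theoretical max:", final / N**2)  # approaches 1.0
```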

Then the researchers realized something profound. The light coming out of these complex arrays of pillars did not need to look, to a human eye, anything like the object being imaged. The pillars could function as highly specialized filters that organize the optical information into categories, such as edges, light areas, and dark areas, or even qualities that a human viewer might not be able to perceive or make sense of but that could be useful to a computer receiving this pre-processed information.
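To make the filtering idea concrete, the sketch below emulates in ordinary Python the kind of pre-processing the article attributes to the pillars: fixed filters that split a scene into channels such as vertical edges, horizontal edges, and bright regions. In the device this happens optically; the kernels here are standard digital stand-ins chosen for illustration.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2D cross-correlation (convolution as used in CNNs)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# A toy scene: a bright square on a dark background.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0

filters = {
    "vertical_edges":   np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float),
    "horizontal_edges": np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], float),
    "bright_regions":   np.ones((3, 3)) / 9.0,       # local brightness average
}

# Each filter sorts the scene into one feature channel, the role the
# article ascribes to the pillar layout.
feature_maps = {name: conv2d(img, k) for name, k in filters.items()}
for name, fmap in feature_maps.items():
    print(f"{name}: response range [{fmap.min():.1f}, {fmap.max():.1f}]")
```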

Suddenly their lens was not just benefiting from artificial intelligence; it was performing functions a lot like what happens in the most sophisticated AI systems that recognize images. Could the lens itself tell the difference between a dog and a horse?

“We realized we don’t need to record a perfect image,” Heide said. “We can record only certain features that we can then aggregate to perform tasks such as classification.”
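Heide's remark about recording features and aggregating them maps onto a very small pipeline: pool each feature channel to one number, then feed the pooled vector to a classifier. The sketch below shows only the data flow; the feature maps and classifier weights are random placeholders, since a real system would learn them.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in feature channels; in the combined system these would come from
# the optical front end (e.g., the edge/brightness channels sketched above).
feature_maps = {
    "vertical_edges":   rng.normal(size=(30, 30)),
    "horizontal_edges": rng.normal(size=(30, 30)),
    "bright_regions":   rng.random((30, 30)),
}

# Aggregate each channel to a single number (global average pooling)...
pooled = np.array([fmap.mean() for fmap in feature_maps.values()])

# ...then apply a tiny linear classifier head. The weights are random
# placeholders; a real system would learn them, possibly jointly with
# the optics themselves.
n_classes = 2
W = rng.normal(size=(n_classes, pooled.size))
logits = W @ pooled
probs = np.exp(logits - logits.max())
probs /= probs.sum()                     # softmax over the class scores
print("class probabilities:", probs)
```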

