The new so-called “time-folded” optics, say the researchers, can capture a scene at multiple depths with one shutter click, without requiring a zoom lens. The technique enables capabilities for ultrafast time- or depth-sensitive cameras that are not possible with conventional photography optics.
The new optics were designed for an ultrafast sensor called a streak camera, which resolves images from ultrashort pulses of light. Streak and other ultrafast cameras have been used to make a trillion-frame-per-second video, scan through closed books, and provide a depth map of a 3-D scene, among other applications.
Previously, such cameras have relied on conventional optics, in which a lens with a given focal length – measured in millimeters or centimeters – must sit at a distance from the imaging sensor equal to or greater than that focal length to capture an image. As a result, longer focal lengths demand physically long lens systems.
The MIT researchers’ technique instead makes a light signal reflect back and forth off carefully positioned mirrors inside the lens system, while a fast imaging sensor captures a separate image at each reflection time. The result is a sequence of images, each corresponding to a different point in time and to a different distance from the lens. Each image can be accessed at its specific time – hence the name “time-folded optics.”
“When you have a fast sensor camera, to resolve light passing through optics, you can trade time for space,” says Barmak Heshmat, first author on a paper on the research. “That’s the core concept of time folding … You look at the optic at the right time, and that time is equal to looking at it in the right distance. You can then arrange optics in new ways that have capabilities that were not possible before.”
The new optics architecture includes a set of semireflective parallel mirrors that reduce, or “fold,” the focal length every time the light reflects between the mirrors. By placing the set of mirrors between the lens and sensor, say the researchers, they condensed the distance of optics arrangement by an order of magnitude while still capturing an image of the scene.
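The folding idea can be illustrated with simple arithmetic. In this sketch the numbers are hypothetical, not the paper's actual design: light bouncing between parallel mirrors spaced a short gap apart accumulates optical path length in a compact physical space, so a long focal distance fits in a cavity roughly one-tenth as deep.

```python
import math

# Illustrative arithmetic for "folding" an optical path (hypothetical
# numbers, not the researchers' design): light bouncing between
# semireflective parallel mirrors spaced `cavity_gap` apart accumulates
# path length while the physical depth stays at `cavity_gap`.

def passes_needed(focal_length_mm: float, cavity_gap_mm: float) -> int:
    """Passes between the mirrors needed to accumulate the focal distance."""
    return math.ceil(focal_length_mm / cavity_gap_mm)

focal_length = 500.0  # mm of optical path the lens requires (assumed)
cavity_gap = 50.0     # mm between the mirrors (assumed)

print(passes_needed(focal_length, cavity_gap))           # 10 passes
print(f"compression: {focal_length / cavity_gap:.0f}x")  # order of magnitude
```

With these example values, ten passes through a 50 mm cavity stand in for a 500 mm optical path, matching the order-of-magnitude compression the researchers describe.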
In their study, the researchers demonstrate three uses of time-folded optics for ultrafast cameras and other depth-sensitive imaging devices. These “time-of-flight” cameras measure the time it takes a pulse of light to reflect off a scene and return to the sensor, and use that time to estimate the depth of the 3-D scene.
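The time-of-flight principle reduces to one formula: the pulse travels to the scene and back, so depth is the speed of light times the round-trip time, divided by two. A minimal sketch (not the researchers' code):

```python
# Time-of-flight depth estimation: a light pulse travels to the scene
# and back, so depth = c * round_trip_time / 2.
C = 299_792_458.0  # speed of light in vacuum, m/s

def depth_from_round_trip(round_trip_seconds: float) -> float:
    """Estimated distance to the reflecting surface, in meters."""
    return C * round_trip_seconds / 2.0

# A round-trip delay of ~6.67 nanoseconds corresponds to a depth of
# about 1 meter -- which is why these cameras need ultrafast sensors.
print(depth_from_round_trip(6.67e-9))
```

Resolving centimeter-scale depth differences requires timing resolution on the order of tens of picoseconds, which is what makes streak cameras and other ultrafast sensors attractive here.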
Ultimately, say the researchers, their study opens doors for many different optics designs by tweaking the cavity spacing, or by using different types of cavities, sensors, and lenses.
“The core message is that when you have a camera that is fast, or has a depth sensor, you don’t need to design optics the way you did for old cameras,” says Heshmat. “You can do much more with the optics by looking at them at the right time.”
For more, see “Photography optics in the time dimension.”