
Ultrasound eye gaze and expression detection in smart glasses

Technology News | By Nick Flaherty



Researchers in the US have developed two technologies that track a person’s gaze and facial expressions through ultrasound sensing.  

The technology developed at Cornell University is small enough to fit on commercial smart glasses or virtual reality (VR) or augmented reality (AR) headsets, yet consumes significantly less power than similar tools using cameras. 

Both use speakers and microphones mounted on an eyeglass frame to bounce inaudible ultrasound off the face and pick up the reflections, which change as the face and eyes move. One device, GazeTrak, is the first eye-tracking system that relies on acoustic signals. The second, EyeEcho, is the first system to continuously and accurately detect facial expressions and recreate them through an avatar in real time.

The devices can run for several hours on a smart glasses battery and for more than a day on a VR headset.

The technologies could also be used to help diagnose or monitor neurodegenerative diseases such as Alzheimer’s and Parkinson’s. Patients with these conditions often have abnormal eye movements and less expressive faces, and this type of technology could track the progression of the disease from a patient’s home.

“It’s small, it’s cheap and super low-powered, so you can wear it on smart glasses every day – it won’t kill your battery,” said Cheng Zhang, director of the Smart Computer Interfaces for Future Interactions (SciFi) Lab that created the new devices.

“In a VR environment, you want to recreate detailed facial expressions and gaze movements so that you can have better interactions with other users,” said researcher Ke Li.

Using ultrasound instead of video also presents fewer privacy concerns, said Li. “There are many camera-based systems in this area of research or even on commercial products to track facial expressions or gaze movements, like Vision Pro or Oculus,” he said. “But not everyone wants cameras on wearables to capture you and your surroundings all the time.” 

“The privacy concerns associated with systems that use video will become more and more important as VR/AR headsets become much smaller and, ultimately, similar to today’s smart glasses,” said François Guimbretière, professor of information science at Cornell. “Because both technologies are so small and power-efficient, they will be a perfect match for lightweight, smart AR glasses.” 

For GazeTrak, the researchers positioned one speaker and four microphones around the inside of each eye frame of a pair of glasses, so they can bounce sound waves off the eyeball and the skin around the eyes and pick up the reflections. The resulting signals are fed into a customized deep-learning pipeline that continuously infers the direction of the person’s gaze.
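
As a rough illustration of how such a pipeline fits together, the sketch below emits an inaudible chirp, derives a per-microphone echo profile by cross-correlation, and stacks the profiles into the feature map a trained model would consume. The sample rate, chirp band, durations and shapes here are illustrative assumptions, not GazeTrak’s published parameters, and the trained gaze model itself is not included.

```python
# Minimal sketch of an acoustic gaze-sensing pipeline (assumptions, not the
# published GazeTrak design): one speaker emits an inaudible ultrasonic chirp,
# four microphones record the reflections, and cross-correlation against the
# transmitted chirp yields an "echo profile" per microphone. A trained
# regression model (not shown) would map those profiles to gaze direction.
import numpy as np

FS = 50_000               # assumed sample rate in Hz
F0, F1 = 18_000, 21_000   # assumed inaudible chirp band in Hz
CHIRP_MS = 12             # assumed chirp duration in milliseconds

def make_chirp(fs=FS, f0=F0, f1=F1, dur_ms=CHIRP_MS):
    """Linear frequency-modulated excitation signal."""
    t = np.arange(int(fs * dur_ms / 1000)) / fs
    k = (f1 - f0) / t[-1]                      # sweep rate
    return np.sin(2 * np.pi * (f0 * t + 0.5 * k * t**2))

def echo_profile(recording, chirp):
    """Cross-correlate a microphone recording with the transmitted chirp.

    Peaks correspond to reflecting surfaces (eyeball, eyelid, skin); the
    profile shifts as the eye moves, which is what the model learns from.
    """
    corr = np.correlate(recording, chirp, mode="valid")
    return np.abs(corr) / (np.linalg.norm(chirp) + 1e-9)

# Four microphones around one eye -> stack profiles into one feature map.
chirp = make_chirp()
mics = [np.random.randn(4096) for _ in range(4)]   # stand-in for real audio
features = np.stack([echo_profile(m, chirp) for m in mics])

# In a real system, `features` collected over time would be fed to the
# customized deep-learning model that infers gaze direction; here we just
# show the tensor shape such a model would consume.
print(features.shape)   # (4, 4096 - len(chirp) + 1)
```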

GazeTrak does not yet match the accuracy of the leading camera-based eye-tracking technology, but the new device is a proof of concept that acoustic signals are also effective. With further optimization, the researchers believe they can reach the same accuracy while reducing the number of speakers and microphones required.

For EyeEcho, one speaker and one microphone are located next to the glasses’ hinges, pointing down to catch skin movement as facial expressions change. The reflected signals are likewise interpreted using AI.
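
A similarly hedged sketch of the expression side: echo features from the hinge-mounted speaker and microphone pair are mapped to the “blendshape” weights that drive an avatar’s face. The feature length, blendshape count and the tiny untrained placeholder network below are all illustrative assumptions, not EyeEcho’s published design.

```python
# Hedged sketch (assumptions, not the published EyeEcho design): map one
# frame of echo features to avatar blendshape weights in [0, 1]. The
# two-layer network is an untrained stand-in for the real trained model.
import numpy as np

N_FEATURES = 256      # assumed per-frame echo-feature length
N_BLENDSHAPES = 52    # common avatar blendshape count (assumption)

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.05, size=(N_FEATURES, 128))
W2 = rng.normal(scale=0.05, size=(128, N_BLENDSHAPES))

def infer_expression(echo_frame: np.ndarray) -> np.ndarray:
    """Map one frame of echo features to blendshape weights in [0, 1]."""
    h = np.tanh(echo_frame @ W1)             # placeholder hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2)))   # sigmoid -> weights in [0, 1]

# Continuous tracking: run inference on every incoming audio frame.
for frame in rng.normal(size=(3, N_FEATURES)):   # stand-in echo features
    weights = infer_expression(frame)
    # An avatar renderer would consume `weights` here, e.g. to raise a brow.
    print(weights[:3])
```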

This allows users to have hands-free video calls through an avatar, even in a noisy café or on the street. While some smart glasses can recognize faces or distinguish between a few specific expressions, none currently track expressions continuously the way EyeEcho does.

GazeTrak could also be used with screen readers to read out portions of text for people with low vision as they visit a website.

www.cornell.edu

 
