
Microsonar technology allows real-time finger-spelling tracking

News |
By Wisse Hettinga



A Cornell-led research team has developed SpellRing, a ring that combines a speaker, a microphone, a mini gyroscope and some clever machine learning to make real-time American Sign Language (ASL) fingerspelling recognition possible.

A Cornell University report:

In its current form, SpellRing could be used to enter text into computers or smartphones via fingerspelling, which is used in ASL to spell out words without corresponding signs, such as proper nouns, names and technical terms. With further development, the device – believed to be the first of its kind – could revolutionize ASL translation by continuously tracking entire signed words and sentences.

“Many other technologies that recognize fingerspelling in ASL have not been adopted by the deaf and hard-of-hearing community because the hardware is bulky and impractical,” said Hyunchul Lim, a doctoral student in the field of information science. “We sought to develop a single ring to capture all of the subtle and complex finger movement in ASL.”

Developed by Lim and researchers in the Smart Computer Interfaces for Future Interactions (SciFi) Lab, in the Cornell Ann S. Bowers College of Computing and Information Science, SpellRing is worn on the thumb and equipped with a microphone and speaker. Together they send and receive inaudible sound waves that track the wearer’s hand and finger movements, while a mini gyroscope tracks the hand’s motion. These components are housed inside a 3D-printed ring and casing no bigger than a standard U.S. quarter.

A proprietary deep-learning algorithm then processes the sonar images and predicts the ASL fingerspelled letters in real time, with accuracy similar to that of many existing systems that require more hardware.
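The pipeline described above can be pictured in a few steps: emitted inaudible sound reflects off the moving fingers, frame-to-frame changes in the received echoes form a "sonar image", gyroscope readings are fused in, and a learned model maps the result to a letter. The sketch below is a minimal illustration of that idea only; the function names, feature shapes, and nearest-template "classifier" are assumptions for demonstration, not the SpellRing team's actual algorithm.

```python
import numpy as np

FRAME_LEN = 256  # samples per received echo frame (illustrative assumption)

def echo_profile(frames: np.ndarray) -> np.ndarray:
    """Differential echo profile: frame-to-frame changes emphasize
    finger motion while cancelling static reflections from the room."""
    return np.abs(np.diff(frames, axis=0))

def fuse_gyro(profile: np.ndarray, gyro: np.ndarray) -> np.ndarray:
    """Flatten the sonar image and append gyroscope angular rates,
    yielding one feature vector per gesture window."""
    return np.concatenate([profile.ravel(), gyro.ravel()])

def predict_letter(features: np.ndarray, templates: dict) -> str:
    """Stand-in for the deep model: return the letter whose stored
    template feature vector is nearest to the input."""
    return min(templates, key=lambda k: np.linalg.norm(features - templates[k]))

# Toy usage with simulated data.
rng = np.random.default_rng(0)
frames = rng.standard_normal((4, FRAME_LEN))   # 4 consecutive echo frames
gyro = rng.standard_normal(3)                  # angular rates about 3 axes
features = fuse_gyro(echo_profile(frames), gyro)
templates = {"A": features, "B": rng.standard_normal(features.shape)}
letter = predict_letter(features, templates)
```

In the real system a trained deep network replaces the template lookup, but the input/output contract is the same: a window of acoustic-plus-motion features in, a fingerspelled letter out.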

SpellRing builds on a previous iteration from the SciFi Lab called Ring-a-Pose and represents the latest in an ongoing line of sonar-equipped smart devices from the lab.

“While large language models are front and center in the news, machine learning is making it possible to sense the world in new and unexpected ways, as this project and others in the lab are demonstrating,” said co-author François Guimbretière, professor of information science (Cornell Bowers CIS).

“Deaf and hard-of-hearing people use more than their hands for ASL. They use facial expressions, upper body movements and head gestures,” said Lim, who completed basic and intermediate ASL courses at Cornell as part of his SpellRing research. “ASL is a very complicated, complex visual language.”

This research was funded by the National Science Foundation.
