AI-equipped eyeglasses can read silent speech
Ruidong Zhang, a doctoral student in information science at Cornell University, is silently mouthing the passcode to unlock his nearby smartphone and play the next song in his playlist. It’s not telepathy: it’s the seemingly ordinary, off-the-shelf eyeglasses he’s wearing, called EchoSpeech – a silent-speech recognition interface that uses acoustic sensing and artificial intelligence to continuously recognize up to 31 unvocalized commands, based on lip and mouth movements.
Developed by Cornell’s Smart Computer Interfaces for Future Interactions (SciFi) Lab, the low-power, wearable interface requires just a few minutes of user training data before it will recognize commands and can be run on a smartphone, researchers said.
“For people who cannot vocalize sound, this silent speech technology could be an excellent input for a voice synthesizer. It could give patients their voices back,” Zhang said of the technology’s potential use with further development. Read more about the project from the Cornell SciFi Lab.