The Stack Archive

Silent ear and tongue-tracking tech can control wearables

Wed 18 Nov 2015


Scientists at Georgia Tech are developing silent speech systems that enable fast, hands-free communication with wearable devices, controlled by the user’s tongue and ears.

As seen this week with open source project Eyedrivomatic, the researchers want to apply the technology as a device-control solution for people with disabilities. They suggest it could also be used by those working in loud environments who need a quiet way to communicate with their wearable devices.

The new technology is partially built on a magnetic tongue control system previously used to help people with paralysis steer a wheelchair through tongue movements. However, the team decided against a full tongue-control implementation, which can be quite invasive, requiring a magnetic tongue piercing or implanted sensors.

Thad Starner, a Georgia Institute of Technology professor and technical lead on Google Glass, revealed that he was inspired to investigate ear movements following a trip to the dentist. Starner felt the space inside his ears move as the dentist tested his jaw function – “I said, well, that’s cool. I wonder if we can do silent speech recognition with that?”

The prototype now combines tongue control with earphone-like pieces. Each earpiece is fitted with proximity sensors, which use infrared light to map the changing shape of the ear canal. Starner explained that every word deforms the canal in a different way, allowing for accurate recognition.

During testing, the team trained the system to recognise 12 useful phrases, which it identified correctly 90% of the time when the tongue and ear trackers were used together. Using the ear trackers alone, the accuracy rate was slightly lower.
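The article does not describe how the prototype matches sensor readings to phrases, but the idea of recognising a small, fixed phrase set from ear-canal proximity readings can be illustrated with a minimal nearest-centroid sketch. Everything here is invented for illustration: the phrase names, the three-sensor feature vectors, and the matching method are assumptions, not details of the Georgia Tech system.

```python
import math

# Hypothetical per-phrase "templates": averaged infrared proximity
# readings (invented values) that each phrase is assumed to produce.
TEMPLATES = {
    "volume up":   [0.82, 0.31, 0.55],
    "volume down": [0.20, 0.74, 0.48],
    "answer call": [0.55, 0.52, 0.91],
}

def classify(reading):
    """Return the phrase whose template is nearest (Euclidean) to the reading."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(TEMPLATES, key=lambda phrase: dist(TEMPLATES[phrase], reading))

# A reading close to the "volume up" template is matched to that phrase.
print(classify([0.80, 0.30, 0.50]))
```

A real system would work on time series rather than single vectors and would need per-user calibration, but the principle – compare a new reading against stored examples of each known phrase – is the same.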

In the next stages of development, the researchers hope to build up a phrasebook of common words and sentences which can be recognised by the ear system alone. “We’re trying to figure out the fundamental parts of speech we can recognise. We call them ‘jaw-emes’,” said Georgia Tech graduate student Abdelkareem Bedri.

Other research, which has reached 96% accuracy, has looked into modifying the ear-tracking technology to read simple jaw movements, for example from left to right.

Bruce Denby, a silent speech researcher at the Pierre and Marie Curie University, Paris, said that proving the technology is industry-ready will be critical in bringing the product to market.

“The true holy grail of silent speech is continuous speech recognition,” said Denby, who added that the ability to recognise even a small selection of words is an incredible benefit for disabled individuals.

