Translating sign language in the 21st century
When I was in elementary school, I got to take a six-week sign language course. In that short time, we only memorized the alphabet and a few words – but even those, I felt, would allow me to communicate at least a little with members of the deaf community.
Flash forward a few decades. As I read the news, I wonder more and more whether programs like the one I took will even be offered in the future. Fortunately, that wouldn't necessarily be a bad thing: technology is now being developed that could help hearing people and deaf people understand and communicate with each other effortlessly.
Healthcare and start-up expert Malay Gandhi points out that the way we’ve generally tried to bridge the communication gap between the hearing and the hard of hearing is by “work[ing] on hearing-aid like devices to improve sound transmission rather than focusing on conversation.” But recently, some groups have taken a different approach.
For example, Chinese researchers at Microsoft are developing a program that uses the motion-capture technology created for the Kinect. The device would analyze a signer's movements and translate them into written text and spoken words. Conversely, a hearing person's speech would be translated into sign language via an onscreen avatar. It's a great idea, although one limitation is that it apparently requires a computer screen and other equipment. But Microsoft's Stewart Tansley suggests that the device could be installed in, say, hospitals or doctors' offices, where it could allow for more fluid interactions between medical professionals and hard-of-hearing patients.
Students at Asia University have developed another promising prototype: a set of six motion-detecting rings and a bracelet that would be worn by a deaf person. When the wearer signed, the bracelet would translate their movements into speech, via built-in speakers, and into text, via an LED screen on the bracelet. In addition, microphones on the bracelet would pick up anything said to the wearer and display it as written words on the screen.
A third translation device still in development is an app called iseewhatyousay. One of the project's inventors and co-founders, Jibril Jaha, is deaf. A Lyft driver by day, he uses his interactions with passengers to test the app: they speak into a smartphone's microphone, and their speech is translated into text that appears on a small receiving device he wears like a watch. So far, it seems to be working well. This article spotlights Jaha's invention and also mentions another one, a tablet created by MotionSavvy, a group of deaf inventors. The tablet uses a motion detector to translate sign language into speech, and speech into sign language (displayed on the screen).
Of course, if you want to have deep, nuanced conversations with someone, it's always best to learn their language; no electronic translation tool is likely to capture every aspect of communication. But at least technology is helping to make fluid everyday communication between deaf and hearing people possible. It will be interesting to see how all of these projects evolve.