Deaf people and non-deaf people should be able to communicate seamlessly, which is what a new device from Lero aims to achieve.
With inclusion and diversity now placed firmly in the spotlight, one of the biggest developments in recent years has been the effort to enable more communication and cooperation between deaf and non-deaf people.
To take one recent example, members of the astronomical community have put a considerable amount of work into creating a multilingual dictionary of sign language for deaf astronomers.
However, the vast majority of non-deaf people do not know sign language, meaning an interpreter is still needed in many situations.
Now, a team of researchers from University College Dublin (UCD) and the Science Foundation Ireland-funded research centre Lero has revealed a prototype device that could make communication between deaf and non-deaf people a lot faster.
The prototype device itself is based on the commercially available HoloLens, an augmented-reality headset developed by Microsoft, which partnered with Lero on the project.
It works using the company’s popular communications tool, Skype, along with its LUIS.ai language-understanding service, Azure Cognitive Services and Xbox depth-camera technologies.
When a non-deaf person wears the headset, an on-screen avatar translates a deaf person’s sign language into speech; worn by a deaf person, it can also translate spoken words into Irish Sign Language.
So far, tests conducted with deaf people have shown promising results, with Philip Power, a deaf law student at UCD, saying: “My reaction when I first used the prototype was ‘Wow, absolutely fantastic!’ I think it’s going to be a great benefit for myself, for students and just for everyday life as well. It’s a really, really good idea.”
The next steps
Leading the Lero and UCD programme is Dr Anthony Ventresque, who is also director of the UCD Complex Software Lab.
He said: “The next step is to make the prototype realistic and useful to the end user by integrating facial expression recognition and generation into the interpreter, which is a critical feature missing from current offerings in this area.
“We feel we also have a head start in this space as we’ve developed an innovative solution to emotion detection in the context of another joint Lero-Skype project.”
Updated, 5.21pm, 15 August 2018: This article was updated to amend a quote attributed to Philip Power.