Voice recognition tech hacked with voice-morphing tool

28 Sep 2015

Voice-based authentication is no longer an assured security measure, as a team of researchers has found a way to fool both human listeners and verification software with the help of a cheap voice-morphing tool.

With hacking attempts against major organisations a constant threat, many of those targeted are turning to biometric fail-safes to protect themselves, most notably voice recognition security, which grants access to a location or to files only when a passphrase is spoken in the authorised user's voice.

But now, according to the University of Alabama, a team of researchers has revealed that the technology is by no means foolproof and can be cracked with a relatively cheap tool and some ingenuity on the part of the hacker.

With a readily available automated speech synthesis tool, all a potential hacker has to do is gather a small number of samples of the target speaking; with the addition of voice morphing, these can be used to transform the attacker’s voice into that of the victim.

To test the theory, the research team targeted a voice biometrics system that analyses a speaker’s unique vocal patterns to identify them and, using morphed samples of the account holder’s voice, was able to gain unfettered access.
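As an illustration only, and not the system the researchers tested, the sketch below shows the basic idea behind this kind of speaker verification: the system stores a ‘voiceprint’ vector for the enrolled user and accepts a login attempt only when the new recording’s vector is similar enough to it. The embedding function and threshold here are hypothetical placeholders; a morphing attack succeeds when converted speech lands above the threshold.

```python
import numpy as np

def embed(recording: np.ndarray) -> np.ndarray:
    """Hypothetical placeholder: real systems derive a 'voiceprint' from
    spectral features of the audio; here we use a toy summary instead."""
    return np.array([recording.mean(), recording.std(), np.abs(recording).max()])

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(enrolled: np.ndarray, attempt: np.ndarray, threshold: float = 0.95) -> bool:
    """Accept the attempt if its voiceprint is close enough to the enrolled one."""
    return cosine_similarity(enrolled, embed(attempt)) >= threshold

# Toy usage: enrol the genuine user, then test a later recording.
rng = np.random.default_rng(0)
genuine = rng.normal(0.0, 1.0, 16000)             # stand-in for an enrolment recording
attempt = genuine + rng.normal(0.0, 0.05, 16000)  # stand-in for a later login attempt
print(verify(embed(genuine), attempt))
```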

‘Possibilities are endless’

A second experiment used the voice-morphing tool to imitate the voices of two celebrities, Oprah Winfrey and Morgan Freeman, and found an attacker could quite easily change their voice to sound like either of them.

“For instance, the attacker could post the morphed voice samples on the internet, leave fake voice messages for the victim’s contacts, potentially create fake audio evidence in the court and even impersonate the victim in real-time phone conversations with someone the victim knows,” said Nitesh Saxena, director of the Security and Privacy In Emerging computing and networking Systems (SPIES) lab. “The possibilities are endless.”

During their experiments, the researchers found that the voice security software rejected the fake voices less than 10pc to 20pc of the time for most victims, while human listeners rejected them only about half the time.
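Put another way, as a simple reading of those figures rather than additional data from the study, those rejection rates imply the software accepted the overwhelming majority of morphed samples, while a 50pc rejection rate from human listeners is no better than guessing.

```python
# Acceptance rate implied by a given rejection rate (illustrative arithmetic only).
for label, rejection in [("software (best case)", 0.20),
                         ("software (worst case)", 0.10),
                         ("human listeners", 0.50)]:
    print(f"{label}: morphed voices accepted ~{1 - rejection:.0%} of the time")
```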

“Our research showed that voice conversion poses a serious threat, and our attacks can be successful for a majority of cases,” Saxena said.

“Worryingly, the attacks against human-based speaker verification may become more effective in the future because voice conversion/synthesis quality will continue to improve, while it can be safely said that human ability will likely not.”

Voice levels image via Shutterstock

Colm Gorey was a senior journalist with Silicon Republic

editorial@siliconrepublic.com