Voice assistants fooled by ‘dolphin’ ultrasound messages, according to new research

8 Sep 2017


Bottlenose dolphin. Dolphins use ultrasound in a process called ‘echolocation’. Image: Kebrun/Shutterstock


Voice-controlled assistants are well on their way to ubiquity, but this latest security flaw is raising concerns.

Siri, Alexa and Google Assistant are becoming part and parcel of many people’s daily lives, with users reporting that products such as these make their routines more convenient and streamlined.

While this may be true, and the world certainly is turning towards the concept of a smart home, there are some worries that remain.

Hearing voices

According to MIT Technology Review, researchers at Zhejiang University in China have shown that commands encoded in high-frequency sounds, inaudible to the human ear, can still be recognised by voice assistants.

“They take a regular human voice and use it to modulate an ultrasound signal, much like the way music can be encoded onto radio waves.

“Turns out, the mic on devices like an iPhone or Amazon Echo speaker can still detect the sound, and their signal-processing software also picks up the voice signals encoded on the wave.”
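The technique described above is classic amplitude modulation: the audible voice signal is shifted up onto an ultrasonic carrier, just as audio is encoded onto radio waves. A minimal sketch of that idea in Python with NumPy follows; the function name, the 25kHz carrier and the stand-in tone are illustrative assumptions, not details from the researchers’ actual implementation.

```python
import numpy as np

def modulate_ultrasound(voice, fs, carrier_hz=25000.0):
    """Amplitude-modulate an audible signal onto an ultrasonic carrier.

    voice      -- audio samples normalised to [-1, 1]
    fs         -- sample rate in Hz (must exceed 2 * carrier_hz)
    carrier_hz -- carrier frequency above human hearing (~20 kHz)
    """
    t = np.arange(len(voice)) / fs
    carrier = np.cos(2 * np.pi * carrier_hz * t)
    # Standard AM: the voice spectrum ends up as sidebands around the
    # carrier. Nonlinearity in a microphone can then demodulate it back
    # into the audible band, which is the effect the attack exploits.
    return (1.0 + voice) * carrier

# Example: a 1 kHz tone standing in for speech, sampled at 96 kHz
# (high enough to represent the 25 kHz carrier).
fs = 96000
t = np.arange(fs) / fs
voice = 0.5 * np.sin(2 * np.pi * 1000 * t)
signal = modulate_ultrasound(voice, fs)
```

In the resulting signal, almost all the energy sits at and around 25kHz, well outside the range a person can hear, which is why the commands are silent to bystanders but still present for the microphone’s signal chain.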

The researchers have been able to activate Siri to begin a FaceTime call, switch a phone to flight mode using Google Now and even control an Audi vehicle’s navigation system. The team describes the potential threat as a ‘dolphin attack’, so-called because of the underwater mammal’s use of ultrasound to detect other animals, food and dangers.

According to the BBC, the researchers were able to activate the assistants from several feet away using the ultrasound waves. They suggested that an attacker could embed hidden ultrasonic messages in online video content, or broadcast them in public when they were near a potential target.

Not an immediate threat

Dr Steven Murdoch, a cybersecurity researcher at University College London, explained to the BBC that although the hack is indeed possible, it’s not necessarily realistic at this point in the development of voice assistants: “I would expect the smart speaker vendors will be able to do something about it and ignore the higher frequencies.”

Spokespeople from Google and Amazon have said they are reviewing the findings of the Chinese researchers.

Ellen Tannam is a writer covering all manner of business and tech subjects

editorial@siliconrepublic.com