How can you make your smart speaker more secure?

17 Oct 2018


Image: © rcfotostock/Stock.adobe.com


Smart speakers are becoming ubiquitous, but security concerns around smart home technologies show no sign of abating.

A growing number of households now have a smart speaker of some kind and, while these devices undoubtedly add convenience, security researchers have shown they can sometimes be exploited.

While it is unrealistic and extreme to suggest not using these handy gadgets at all, there are some things you can do to make them as secure as possible. Siliconrepublic.com spoke to Sean McGrath, cybersecurity advocate at BestVPN.com, about the advent of these smart devices.

‘Smart speakers are always listening. This little nugget of fear lies at the epicentre of all our concerns surrounding voice assistant technology’
– SEAN MCGRATH

What are the biggest vulnerabilities present in smart speakers? Are the claims around the security risks overblown or grounded in real experience and research?

We’ve all heard the horror stories surrounding voice assistant technology. While these anecdotes serve as important reminders that we need to be mindful about the wider implications of smart home technologies, they don’t address the most significant privacy concern.

Smart speakers are always listening. This little nugget of fear lies at the epicentre of all our concerns surrounding voice assistant technology. It should be noted that data is only sent to back-end servers after the wake-up word has been heard, and that data is then sent over an encrypted connection. But the fact remains: voice assistants are always on, always listening, and therefore will always represent a significant attack surface in the home.

At present, the majority of viable threats to voice assistants are not focused on hacking into the device or eavesdropping on conversations, but rather on exploiting the built-in functionality of the technology. Researchers have successfully demonstrated that simple homophones – words that sound the same but have different meanings – can be used to invoke malicious skills on Amazon’s Echo devices in place of the legitimate ones a user asked for.

These fake skills can then either attempt to extract further personal details from the user (such as bank details) or simply start listening in.
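The attack described above relies on two invocation names sounding alike. One way to see how easily that happens is with a phonetic algorithm such as Soundex, which maps similar-sounding words to the same short code. The sketch below is purely illustrative (the word pair is a classic homophone example, not a real skill name):

```python
def soundex(word: str) -> str:
    """Encode a word so that similar-sounding words share a 4-character code."""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    word = word.lower()
    first = word[0].upper()
    result = []
    prev = codes.get(word[0], "")
    for char in word[1:]:
        code = codes.get(char, "")
        if code and code != prev:  # keep new consonant codes only
            result.append(code)
        if char not in "hw":       # h/w do not separate duplicate codes
            prev = code
    return (first + "".join(result) + "000")[:4]

# "capital" and "capitol" sound identical, so they collide:
print(soundex("capital"))                        # C134
print(soundex("capital") == soundex("capitol"))  # True
```

A malicious skill whose name collides phonetically with a popular one can be triggered by the same spoken phrase, which is the essence of the "voice squatting" research mentioned above.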

What should people do if they want to use a smart speaker safely?

We all have to decide on our own personal threat model. In other words, we have to assess the potential dangers of any given technology and weigh that threat up against the convenience that it brings us. A record player, a stopwatch, a pen, a notepad, a thermometer and a newspaper will give you all of the functionality of a voice assistant and will eliminate any chance of you being hacked. But it’s not practical – or fun.

The reality is, most of us place a greater value on convenience than we do on privacy these days. We use mobile phones with weak biometric technology, computers with weak passwords and a growing number of IoT devices with little or no security measures in place at all.

There are some general guidelines you can follow to make sure your voice assistant devices are as secure as possible.

  • Mute your assistant when not in use. This often negates the whole idea behind a voice-activated assistant, because you have to unmute it the next time you want to use it.
  • Use a WPA2-encrypted Wi-Fi network.
  • Protect your account with two-factor authentication.
  • Turn off purchasing.
  • Create a new account and do not allow the device to access your calendar or address book.
  • Disable any services or skills that you do not use.
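The two-factor authentication step in the list above usually means a time-based one-time password (TOTP) from an authenticator app. To illustrate the mechanism, here is a minimal standard TOTP sketch using only the Python standard library; the secret shown is a placeholder, since a real one is issued when you enrol in 2FA on your account:

```python
import base64
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret_b32: str, step: int = 30) -> str:
    """Time-based variant (RFC 6238): the counter is the current 30s window."""
    key = base64.b32decode(secret_b32, casefold=True)
    return hotp(key, int(time.time()) // step)

# Placeholder secret for illustration only
print(totp("JBSWY3DPEHPK3PXP"))
```

Because the code changes every 30 seconds and is derived from a shared secret, an attacker who phishes your password alone still cannot log in to the account linked to your smart speaker.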

How do you minimise the risk of eavesdropping?

The real risk of eavesdropping comes from the device manufacturers. Amazon is currently looking to patent a ‘sniffer algorithm’ that can learn your likes and dislikes by listening in on your conversations and then use that data to target adverts at you. This is a plausible scenario so it’s worth keeping on top of updates to terms and conditions, and switching off features you are not happy with as and when they come along.

Edward Snowden’s NSA revelations also demonstrated that government agencies will pressure tech companies, and go to great lengths, to exploit these always-on listening devices. The potential is simply too great for the intelligence services to ignore.

As the technology evolves, do you think bad actors will evolve along with it?

Without a doubt. Since the dawn of digital technology, there has always been a fight between the good guys and the bad guys. And, as depressing as it sounds, the bad guys are usually one step ahead. After all, you can only patch a vulnerability once you know it exists.

Malicious actors have the luxury of time, and we, as a society, are providing a huge attack surface, thanks to our insatiable demand for cutting-edge gadgets. Voice assistant technology is still relatively new and there are undoubtedly vulnerabilities yet to be discovered. As new features come along, the cat-and-mouse game will continue.

Do you think people need to be more aware of their privacy in general and the value of their personal conversations?

Absolutely. We’ve lost touch with what privacy means. When our personal information is ‘out there’ in the digital realm, we seem to forget its true value. The vast majority of these technologies are based on a business model that focuses on personal data collection.

We have become numb to this reality because these services make our lives easier. We all need to do some soul-searching and assess what value we place on our right to privacy. Then, we can make an informed decision about whether these technologies are really worth the trade-off.

Ellen Tannam is a writer covering all manner of business and tech subjects

editorial@siliconrepublic.com