Could hackers trick voice assistants into committing fraud? Researchers say yes.
Voice assistant technology is supposed to make our lives easier, but security experts say it comes with some uniquely invasive risks. Since the beginning of the year, multiple Nest security camera owners have reported strangers hacking into their devices and using them to issue voice commands to Alexa, falsely announce a North Korean missile attack, and target one family by speaking directly to their child, turning up their home thermostat to 90 degrees, and shouting insults. These incidents are alarming, but the potential for silent compromises of voice assistants could be even more damaging.
Nest owner Google — which recently integrated Google Assistant support into Nest control hubs — has blamed weak user passwords and a lack of two-factor authentication for the attacks. But even voice assistants with strong security may be vulnerable to stealthier forms of hacking. Over the past couple of years, researchers at universities in the US, China, and Germany have successfully used hidden audio files to make AI-powered voice assistants like Siri and Alexa follow their commands.
These findings highlight the possibility that hackers and fraudsters could hijack freestanding voice assistant devices as well as voice-command apps on phones to open websites, make purchases, and even turn off alarm systems and unlock doors — all without humans hearing anything amiss. Here’s an overview of how this type of attack works and what the consequences could be.
Speech-recognition AI can process audio humans can’t hear
At the heart of this security issue is the fact that the neural networks powering voice assistants have much better "hearing" than humans do. In the background noise of a busy restaurant, for example, people can make out only some of the individual sounds, while speech-recognition systems can pick apart far more of what a microphone records. These tools can also process audio at frequencies outside the roughly 20 hertz to 20 kilohertz range of human hearing, and many ordinary speakers can emit those frequencies while smartphone microphones still pick them up.
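To get a feel for the underlying idea, here is a rough Python sketch (not any research team's actual attack code) of how an audio signal could be shifted onto an ultrasonic carrier that people cannot hear. The sample rate, the 25 kHz carrier, and the simple tone standing in for a spoken command are all hypothetical values chosen purely for illustration.

```python
import numpy as np

# Illustrative only: amplitude-modulate a stand-in "command" signal onto an
# ultrasonic carrier. Frequencies above roughly 20 kHz are inaudible to
# people, but many speakers can emit them and many microphones respond to
# them; researchers have shown that nonlinearities in microphone hardware
# can shift the hidden command back down into the audible band, where a
# speech-recognition model may then transcribe it.

SAMPLE_RATE = 192_000   # hypothetical high-rate playback hardware
CARRIER_HZ = 25_000     # above the ~20 kHz ceiling of human hearing
DURATION_S = 1.0

t = np.arange(int(SAMPLE_RATE * DURATION_S)) / SAMPLE_RATE

# A 300 Hz tone stands in for a recorded voice command.
command = 0.5 * np.sin(2 * np.pi * 300 * t)

# Classic amplitude modulation: ride the command on the ultrasonic carrier.
carrier = np.sin(2 * np.pi * CARRIER_HZ * t)
ultrasonic_signal = (1.0 + command) * carrier

# Played through a speaker, this waveform sounds like silence to a person
# in the room, yet it is a perfectly ordinary signal to the hardware that
# captures it.
```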