Are secret Alexa, Siri and Assistant commands hiding within music?
12 May 2018, 06:00 | Justin Tyler
For many people, digital assistants have become a part of daily life - you might have Siri set a reminder for an upcoming appointment on your iPhone, or tell Alexa to order more laundry detergent from Amazon. Researchers, however, have found that digital assistants can be manipulated using white noise and commands that the human ear doesn't register.
The researchers said they were able to send malicious commands to Amazon's Alexa, Apple's Siri, and Google Assistant that were hidden in recorded music or innocuous-sounding speech. In doing so, they were able to make these systems dial phone numbers, open websites, and more. Hackers might not care about your shopping list, but considering 41.4 percent of smart speakers are in the kitchen, it's important to consider whether they could be used to turn on an oven while you're out, or secretly start a video call.
According to Nicholas Carlini, a fifth-year Ph.D. student in computer security at U.C. Berkeley, it is only a matter of time before this is exploited; he figures "that the malicious people already employ people to do" what he does. Carlini added that he's confident he and his colleagues will eventually be able to attack any smart device. Speech-recognition systems typically translate each sound to a letter, then compile these into words and phrases.
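The sound-to-letter pipeline described above can be sketched in miniature. The snippet below is an illustrative, hypothetical "collapse" step in the style of CTC decoding - it is not any vendor's actual pipeline - showing how repeated per-frame letter guesses get merged into a final word. An adversarial audio attack works by nudging those per-frame guesses toward a different phrase than the one a human hears.

```python
# Hypothetical sketch of a CTC-style decode step: a recognizer emits one
# letter guess per short audio frame, and a collapse pass merges repeats
# and drops "blank" frames to recover the spoken word.

BLANK = "_"  # placeholder emitted for frames where no letter is heard

def collapse(frame_letters):
    """Merge consecutive repeated letters and drop blanks."""
    out = []
    prev = None
    for ch in frame_letters:
        if ch != prev and ch != BLANK:
            out.append(ch)
        prev = ch
    return "".join(out)

# Per-frame predictions for audio of someone saying "call"
frames = list("ccc_aaa_ll_l")
print(collapse(frames))  # prints "call"
```

An attacker who can shift even a few frame-level predictions can make the device transcribe a phrase the listener never consciously heard.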
I can remember when speech recognition was so poor it was comical, and now, only a decade or so later, machines can recognize speech as well as, if not better than, humans. Makers of these devices have not ruled out the possibility of such attacks happening in future, but Apple, Amazon and Google have responded to the research, noting the security risk mitigation strategies they each have in place.
They used a technique called DolphinAttack, which translates voice commands into ultrasonic frequencies that are too high for the human ear to register. In the music-based attacks, meanwhile, the audio is altered so that the device transcribes it as arbitrary speech while human listeners cannot hear the targeted attacks play out.
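The core idea behind DolphinAttack - amplitude-modulating an audible voice command onto an ultrasonic carrier - can be sketched with NumPy. The sample rate, carrier frequency, and stand-in "command" tone below are illustrative assumptions; real attacks also rely on non-linearities in the microphone hardware to demodulate the signal back into the audible band.

```python
import numpy as np

FS = 96_000          # sample rate high enough to represent ultrasound (assumed)
CARRIER_HZ = 25_000  # above typical human hearing (~20 kHz); illustrative value

def modulate_ultrasonic(command, fs=FS, carrier_hz=CARRIER_HZ):
    """Amplitude-modulate a baseband 'voice command' onto an ultrasonic carrier."""
    t = np.arange(len(command)) / fs
    carrier = np.sin(2 * np.pi * carrier_hz * t)
    # Classic AM: shift the command to a non-negative envelope, then multiply
    envelope = 0.5 * (1.0 + command / (np.max(np.abs(command)) + 1e-12))
    return envelope * carrier

# Stand-in for a recorded command: one second of a 400 Hz tone
t = np.arange(FS) / FS
command = np.sin(2 * np.pi * 400 * t)
signal = modulate_ultrasonic(command)

# The transmitted energy sits around the 25 kHz carrier, outside the audible band
spectrum = np.abs(np.fft.rfft(signal))
peak_hz = np.argmax(spectrum) * FS / len(signal)
print(peak_hz)  # prints 25000.0
```

To a human ear the transmitted signal is silent, but a microphone whose response distorts the ultrasound recovers the original 400 Hz envelope - or, in an attack, a spoken command.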
Apple said its smart speaker, HomePod, is designed to prevent commands from doing things like unlocking doors, and it noted that iPhones and iPads must be unlocked before Siri will act on commands that access sensitive data or open apps and websites, among other measures. Researchers at the University of Illinois at Urbana-Champaign showed that, though the commands couldn't yet penetrate walls, they still had the potential to control smart devices through open windows in buildings.
Carlini went on to note: "We want to demonstrate that it's possible, and then hope that other people will say, 'Okay this is possible, now let's try and fix it'".