Apple’s Siri, Amazon’s Alexa, and Google’s Assistant were meant to be controlled by live human voices, but all three AI assistants are susceptible to hidden commands undetectable to the human ear, researchers in China and the United States have discovered. From a report: The New York Times reports today that the assistants can be controlled using inaudible commands hidden in radio music, YouTube videos, or even white noise played over speakers, a potentially serious security risk for users. According to the report, the assistants can be made to dial phone numbers, launch websites, make purchases, and access smart home accessories such as door locks, all while human listeners hear anything from entirely different spoken text to recordings of music.
In some cases, assistants can be instructed to take pictures or send text messages, receiving commands from up to 25 feet away through a building’s open windows. Researchers at the University of California, Berkeley said they could slightly alter audio files “to cancel out the sound that the speech recognition system was supposed to hear and replace it with a sound that would be transcribed differently by machines while being nearly undetectable to the human ear.”
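The idea the Berkeley researchers describe can be sketched in miniature: add a small, targeted perturbation to an audio signal so that a machine transcribes a different command, while the added signal stays quiet relative to the original. The sketch below uses a deliberately toy correlation-based "recognizer" rather than a real speech model, and all command names and numbers are illustrative assumptions, not the researchers' actual method.

```python
import numpy as np

# Toy "speech recognizer": transcribes whichever command template
# correlates best with the input waveform. Real ASR systems are deep
# networks, but the attack principle is similar: a small, targeted
# perturbation can push the input across the decision boundary.
rng = np.random.default_rng(0)
n = 16_000  # one second of samples at 16 kHz

template_play_music = rng.standard_normal(n)
template_unlock_door = rng.standard_normal(n)

def recognize(audio):
    """Return the command whose template correlates best with audio."""
    scores = {
        "play music": audio @ template_play_music,
        "unlock the door": audio @ template_unlock_door,
    }
    return max(scores, key=scores.get)

# Benign audio: mostly noise, weakly matching the harmless command.
audio = rng.standard_normal(n) + 0.05 * template_play_music
assert recognize(audio) == "play music"

# Adversarial direction: toward the target command's template and away
# from the current transcription, normalized to unit energy.
direction = template_unlock_door - template_play_music
delta = direction / np.linalg.norm(direction)

# Search for the smallest scale that flips the transcription.
eps = 0.0
while recognize(audio + eps * delta) == "play music":
    eps += 0.1

adversarial = audio + eps * delta
print(recognize(adversarial))  # now "unlock the door"
rel = np.linalg.norm(eps * delta) / np.linalg.norm(audio)
print(f"perturbation is {rel:.1%} of the signal's energy")
```

The perturbation that flips this toy recognizer is only a few percent of the signal's energy; the published attacks achieve the analogous effect against real speech models while keeping the change nearly imperceptible to listeners.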