Audio Adversarial Examples: Targeted Attacks on Speech-to-Text
Nicholas Carlini
Presented at the 1st Deep Learning and Security Workshop, May 24, 2018,
at the 2018 IEEE Symposium on Security & Privacy
San Francisco, CA
http://www.ieee-security.org/TC/SP2018/
https://www.ieee-security.org/TC/SPW2018/DLS/
ABSTRACT
We construct targeted audio adversarial examples on automatic speech recognition. Given any audio waveform, we can produce another that is over 99.9% similar, but transcribes as any phrase we choose (recognizing up to 50 characters per second of audio). We apply our white-box iterative optimization-based attack to Mozilla's implementation of the end-to-end DeepSpeech system, and show it has a 100% success rate. The feasibility of this attack introduces a new domain in which to study adversarial examples.
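To illustrate the kind of white-box iterative optimization the abstract describes, the sketch below optimizes an additive perturbation so that a speech model's output transcribes a chosen target phrase. It is a minimal illustration only, written against a generic PyTorch-style model: the function name `targeted_audio_attack`, the placeholder `model` interface, and the simple L2 distortion penalty are assumptions for exposition. The paper's actual attack targets Mozilla's DeepSpeech implementation and bounds the perturbation differently (measuring distortion in decibels), which this sketch does not reproduce.

```python
# Hypothetical sketch of an iterative optimization-based targeted audio attack.
# NOT the paper's implementation: `model` is an assumed callable mapping a raw
# waveform to per-frame log-probabilities of shape (time, 1, vocab).
import torch

def targeted_audio_attack(model, audio, target_ids, steps=1000, lr=10.0, c=1.0):
    """Find a small delta so that model(audio + delta) transcribes target_ids.

    audio      -- 1-D float tensor, the original waveform
    target_ids -- 1-D long tensor of target label indices for CTC (no blanks)
    c          -- weight trading off distortion against attack success
    """
    delta = torch.zeros_like(audio, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    ctc = torch.nn.CTCLoss(blank=0)

    for _ in range(steps):
        log_probs = model(audio + delta)          # (T, 1, vocab) log-probabilities
        T = log_probs.shape[0]
        # CTC loss pushes the transcription of the perturbed audio toward the target.
        attack_loss = ctc(log_probs,
                          target_ids.unsqueeze(0),
                          torch.tensor([T]),
                          torch.tensor([len(target_ids)]))
        # Penalize the size of the perturbation to keep it (nearly) inaudible.
        loss = attack_loss + c * delta.norm(p=2)
        opt.zero_grad()
        loss.backward()
        opt.step()

    return (audio + delta).detach()
```

The key design choice, shared with the attack the abstract summarizes, is to optimize over the raw waveform end-to-end using the model's own training loss (CTC) for the target phrase, rather than perturbing hand-crafted features such as MFCCs.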