Aug. 3, 2023, 1:10 a.m. | Xinfeng Li, Chen Yan, Xuancun Lu, Zihan Zeng, Xiaoyu Ji, Wenyuan Xu

cs.CR updates on arXiv.org

Automatic speech recognition (ASR) systems have been shown to be vulnerable
to adversarial examples (AEs). Recent successful attacks all assume that users
will not notice or disrupt the attack process, despite the presence of music-
or noise-like sounds and spontaneous responses from voice assistants. However,
in practical user-present scenarios, user awareness may nullify existing
attacks that produce unexpected sounds or unexpected ASR activity. In this
paper, we seek to bridge this gap in existing research and extend the attack
to user-present scenarios. We …
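For context on the AEs the abstract refers to: a common way to craft an audio adversarial example is gradient-based optimization of a small perturbation toward a target transcription. The following is a minimal sketch of that standard technique (in the spirit of prior targeted audio-AE work), not the attack proposed in this paper; the `model` interface, CTC setup, and hyperparameters are illustrative assumptions.

```python
import torch

def make_adversarial_audio(model, audio, target_tokens, target_lengths,
                           epsilon=0.002, steps=100, lr=1e-4):
    """Perturb `audio` (batch, samples) so a hypothetical CTC-trained ASR
    `model` transcribes `target_tokens`, keeping the perturbation inside
    an L-infinity ball of radius `epsilon` as a rough imperceptibility proxy."""
    delta = torch.zeros_like(audio, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    ctc = torch.nn.CTCLoss(blank=0)
    for _ in range(steps):
        # Assumed model output shape: (time, batch, classes).
        logits = model(audio + delta)
        log_probs = logits.log_softmax(dim=-1)
        input_lengths = torch.full((audio.size(0),), logits.size(0),
                                   dtype=torch.long)
        # Minimizing CTC loss w.r.t. the target pushes the transcription
        # toward the attacker-chosen phrase.
        loss = ctc(log_probs, target_tokens, input_lengths, target_lengths)
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Project back into the perturbation budget after each step.
        with torch.no_grad():
            delta.clamp_(-epsilon, epsilon)
    return (audio + delta).detach()
```

Clamping after every optimizer step keeps the perturbation bounded throughout the search; the user-present setting studied in this paper adds the further constraint, beyond this simple budget, that the sound must not alert a nearby user.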
