all InfoSec news
Inaudible Adversarial Perturbation: Manipulating the Recognition of User Speech in Real Time. (arXiv:2308.01040v1 [cs.CR])
cs.CR updates on arXiv.org
Automatic speech recognition (ASR) systems have been shown to be vulnerable
to adversarial examples (AEs). Recent successful attacks all assume that users
will not notice or disrupt the attack process, despite the music/noise-like
sounds it produces and the spontaneous responses it triggers from voice
assistants. Nonetheless, in practical user-present scenarios, user awareness
may nullify existing attack attempts that launch unexpected sounds or trigger
visible ASR activity. In this paper, we seek to bridge this gap in existing
research and extend the attack to user-present scenarios. We …