May 16, 2024, 4:12 a.m. | Weifei Jin, Yuxin Cao, Junjie Su, Qi Shen, Kai Ye, Derui Wang, Jie Hao, Ziyao Liu


arXiv:2405.09470v1 Announce Type: cross
Abstract: In light of the widespread deployment of Automatic Speech Recognition (ASR) systems, concerns about their security have received more attention than ever before, primarily due to the susceptibility of the underlying Deep Neural Networks. Previous studies have shown that surreptitiously crafted adversarial perturbations can manipulate speech recognition systems into producing malicious commands. These attack methods mostly require adding noise perturbations under $\ell_p$ norm constraints, inevitably leaving behind artifacts of manual modification. Recent …
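The $\ell_p$-norm constraint mentioned in the abstract means the adversarial noise added to a waveform is bounded in magnitude. As a minimal sketch (not the paper's method — function names and the $\ell_\infty$ choice are illustrative), the perturbation can be projected onto an $\ell_\infty$ ball of radius $\epsilon$ before being added to the audio:

```python
import numpy as np

def project_linf(delta, eps):
    """Project a perturbation onto the l-infinity ball of radius eps,
    i.e. clamp every sample of the noise to [-eps, eps]."""
    return np.clip(delta, -eps, eps)

def perturb(waveform, delta, eps):
    """Add an eps-bounded perturbation to a waveform, keeping the
    result in the valid [-1, 1] audio range."""
    adv = waveform + project_linf(delta, eps)
    return np.clip(adv, -1.0, 1.0)

# Illustrative example: bounded noise on a 1-second 440 Hz tone at 16 kHz
sr = 16000
t = np.arange(sr) / sr
x = 0.5 * np.sin(2 * np.pi * 440.0 * t)       # clean audio
rng = np.random.default_rng(0)
delta = rng.normal(scale=0.05, size=x.shape)  # unconstrained raw noise
eps = 0.01                                    # l-infinity budget
x_adv = perturb(x, delta, eps)

# The per-sample deviation never exceeds the budget
assert np.max(np.abs(x_adv - x)) <= eps + 1e-9
```

An attack would iteratively optimize `delta` against the ASR model's loss rather than drawing it at random; the projection step above is what enforces the norm constraint at each iteration.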

