Defense Against Adversarial Attacks on Audio DeepFake Detection. (arXiv:2212.14597v1 [cs.SD])
cs.CR updates on arXiv.org
Audio DeepFakes are artificially generated utterances created with deep
learning methods whose main aim is to fool listeners; much of this audio is
highly convincing. Its quality is sufficient to pose a serious threat to
security and privacy, for example to the reliability of news or through defamation.
To counter these threats, multiple neural-network-based methods for detecting
generated speech have been proposed. In this work, we cover the topic of
adversarial attacks, which decrease the performance of detectors …
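As an illustration of the kind of attack the abstract refers to, the sketch below applies the standard Fast Gradient Sign Method (FGSM) to a toy stand-in detector. The linear "detector", its weights, and all parameter values here are hypothetical placeholders (a real audio deepfake detector would be a neural network over spectral features); only the attack pattern itself, perturbing the input along the sign of the loss gradient, reflects the technique discussed.

```python
import numpy as np

# Hypothetical stand-in for a deepfake detector: score = sigmoid(w.x + b).
# A real detector would be a deep network; this keeps the gradient analytic.
rng = np.random.default_rng(0)
DIM = 16000                      # e.g. one second of 16 kHz audio (illustrative)
w = rng.normal(size=DIM) * 1e-4  # toy weights, not trained on real data
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def detector(x):
    """Toy probability that waveform x is generated (fake)."""
    return sigmoid(w @ x + b)

def fgsm_attack(x, eps):
    """FGSM: step each sample by eps along the sign of the gradient of the
    detector's cross-entropy loss at the true label 'fake' (y = 1).
    For this logistic model, dL/dx = (detector(x) - 1) * w, so the step
    pushes the score toward 'real' while staying within an L-inf ball.
    """
    grad = (detector(x) - 1.0) * w          # gradient of loss w.r.t. input
    return np.clip(x + eps * np.sign(grad), -1.0, 1.0)  # keep valid audio range

# A maximally 'fake-looking' input for this toy model, then its adversarial copy.
fake = np.sign(w)
adv = fgsm_attack(fake, eps=0.2)
```

Even this tiny perturbation (bounded by 0.2 per sample) lowers the detector's fake score, which is the effect the paper's defenses aim to mitigate.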