April 25, 2024, 7:11 p.m. | Haolin Wu, Jing Chen, Ruiying Du, Cong Wu, Kun He, Xingcan Shang, Hao Ren, Guowen Xu

cs.CR updates on arXiv.org

arXiv:2404.15854v1 Announce Type: new
Abstract: The increasing prevalence of audio deepfakes poses significant security threats, necessitating robust detection methods. While existing detection systems exhibit promise, their robustness against malicious audio manipulations remains underexplored. To bridge the gap, we undertake the first comprehensive study of the susceptibility of the most widely adopted audio deepfake detectors to manipulation attacks. Surprisingly, even manipulations like volume control can significantly bypass detection without affecting human perception. To address this, we propose CLAD (Contrastive Learning-based Audio …

