Dec. 15, 2023, 2:24 a.m. | Xiangtao Meng, Li Wang, Shanqing Guo, Lei Ju, Qingchuan Zhao

cs.CR updates on arXiv.org

While DeepFake applications have become popular in recent years, their
abuse poses a serious privacy threat. Unfortunately, most detection
algorithms designed to mitigate this abuse are inherently vulnerable to
adversarial attacks because they are built atop DNN-based classification
models, and the literature has demonstrated that they can be bypassed by
introducing pixel-level perturbations. Though corresponding mitigations have been
proposed, we have identified a new attribute-variation-based adversarial attack
(AVA) that perturbs the latent space via a combination of Gaussian prior …
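To make the known weakness concrete, below is a minimal sketch of the pixel-level perturbation attack the abstract refers to (an FGSM-style evasion), not the AVA method itself. It assumes a hypothetical PyTorch binary DeepFake detector `detector` that outputs a single "fake" logit for an image tensor in [0, 1]; all names and the epsilon budget are illustrative.

```python
# Sketch of a pixel-level adversarial bypass (FGSM-style) against a
# DNN-based DeepFake detector. Illustrative only; not the paper's AVA attack.
import torch
import torch.nn.functional as F

def fgsm_bypass(detector: torch.nn.Module,
                image: torch.Tensor,
                epsilon: float = 8 / 255) -> torch.Tensor:
    """Perturb a fake image so the detector scores it as real.

    image: (1, 3, H, W) tensor in [0, 1]; epsilon bounds the L-inf perturbation.
    """
    image = image.clone().detach().requires_grad_(True)
    logit = detector(image)  # assumed convention: logit > 0 means "fake"
    # Targeted loss: push the detector's output toward the "real" label (0).
    loss = F.binary_cross_entropy_with_logits(logit, torch.zeros_like(logit))
    loss.backward()
    # One signed-gradient descent step, clamped back to the valid pixel range.
    adv = image - epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```

The single signed-gradient step keeps the perturbation imperceptible (bounded in L-infinity norm), which is why such evasions are hard to spot and why the abstract calls classifier-based detectors "inherently vulnerable"; AVA instead operates in latent space rather than on raw pixels.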

