Dec. 15, 2023, 2:24 a.m. | Xiangtao Meng, Li Wang, Shanqing Guo, Lei Ju, Qingchuan Zhao

cs.CR updates on arXiv.org

While DeepFake applications have become popular in recent years, their abuse poses a serious privacy threat. Unfortunately, most of the detection algorithms designed to mitigate this abuse are inherently vulnerable to adversarial attacks because they are built atop DNN-based classification models, and the literature has demonstrated that they can be bypassed by introducing pixel-level perturbations. Although corresponding mitigations have been proposed, we have identified a new attribute-variation-based adversarial attack (AVA) that perturbs the latent space via a combination of Gaussian prior …
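The truncated abstract only hints at the mechanism, but the general idea it names, perturbing a generator's latent space under a Gaussian prior rather than perturbing pixels, can be sketched. The code below is a minimal illustration of that idea, not the paper's AVA algorithm: ToyGenerator, ToyDetector, latent_attack, and the hyperparameters (steps, lr, sigma) are all hypothetical stand-ins for a real face generator and a DNN-based deepfake detector.

```python
import torch

# Hypothetical stand-ins for a face generator G(z) and a DNN-based
# deepfake detector D(x); any differentiable pair would do here.
class ToyGenerator(torch.nn.Module):
    def __init__(self, latent_dim=64, img_dim=3 * 32 * 32):
        super().__init__()
        self.net = torch.nn.Linear(latent_dim, img_dim)

    def forward(self, z):
        return torch.tanh(self.net(z))


class ToyDetector(torch.nn.Module):
    def __init__(self, img_dim=3 * 32 * 32):
        super().__init__()
        self.net = torch.nn.Linear(img_dim, 1)

    def forward(self, x):
        return torch.sigmoid(self.net(x))  # estimated P(image is fake)


def latent_attack(G, D, z, steps=100, lr=0.05, sigma=0.1):
    # Optimize a perturbation of the latent code, not the pixels:
    # drive the detector's fake-score down while a Gaussian (L2)
    # prior keeps the perturbed code close to the original, so the
    # output stays on the generator's image manifold.
    delta = (sigma * torch.randn_like(z)).requires_grad_(True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        x_adv = G(z + delta)
        loss = D(x_adv).mean() + 0.5 * (delta / sigma).pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (z + delta).detach()


if __name__ == "__main__":
    G, D = ToyGenerator(), ToyDetector()
    z = torch.randn(1, 64)
    before = D(G(z)).item()
    z_adv = latent_attack(G, D, z)
    after = D(G(z_adv)).item()
    print(f"detector fake-score: {before:.3f} -> {after:.3f}")
```

The contrast with the pixel-level attacks the abstract mentions is that a latent-space perturbation changes semantic attributes of the generated image rather than adding high-frequency noise, which is why, per the abstract's framing, it can evade mitigations built against pixel-level perturbations.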

