April 27, 2022, 1:20 a.m. | Junhao Dong, Yuan Wang, Jianhuang Lai, Xiaohua Xie

cs.CR updates on arXiv.org arxiv.org

DeepFake face swapping poses a significant threat to online security and
social media: it can replace the source face in an arbitrary photo or video
with the target face of an entirely different person. To prevent this fraud,
some researchers have begun to study adversarial methods against DeepFake
and other face manipulation. However, existing works focus either on the
white-box setting or on a black-box setting that relies on abundant queries,
which severely limits the practical application of these methods. To tackle this …
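As a minimal illustration of the white-box setting the abstract contrasts with the restricted black-box one, the sketch below applies a single FGSM-style signed-gradient step to a face image against a stand-in differentiable model. The surrogate model, loss, and epsilon here are illustrative assumptions, not the paper's actual attack.

```python
# Minimal sketch of a white-box adversarial perturbation (one FGSM step).
# The stand-in model and cross-entropy loss are placeholders for illustration,
# not the method proposed in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_step(image: torch.Tensor, model: nn.Module, label: torch.Tensor,
              epsilon: float = 8 / 255) -> torch.Tensor:
    """Return image + epsilon * sign(grad of loss w.r.t. the image pixels)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adv = image + epsilon * image.grad.sign()   # signed-gradient ascent step
    return adv.clamp(0.0, 1.0).detach()         # keep pixels in a valid range

if __name__ == "__main__":
    # Stand-in model: a trivial classifier over 112x112 RGB face crops.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 112 * 112, 10))
    face = torch.rand(1, 3, 112, 112)           # dummy face image in [0, 1]
    label = torch.tensor([0])
    protected = fgsm_step(face, model, label)
    print(float((protected - face).abs().max()))  # perturbation bounded by epsilon
```

Note that this requires full gradient access to the model, which is exactly the assumption the restricted black-box setting studied in the paper does away with.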

adversarial attack box deepfake restricted
