Aug. 9, 2023, 1:10 a.m. | Boquan Li, Jun Sun, Christopher M. Poskitt

cs.CR updates on arXiv.org

Deepfake videos and images are becoming increasingly credible, posing a
significant threat given their potential to facilitate fraud or bypass access
control systems. This has motivated the development of deepfake detection
methods, in which deep learning models are trained to distinguish between real
and synthesized footage. Unfortunately, existing detection models struggle to
generalize to deepfakes from datasets they were not trained on, but little work
has been done to examine why or how this limitation can be addressed. In this …
