April 29, 2022, 1:20 a.m. | Zhi Wang, Yiwen Guo, Wangmeng Zuo

cs.CR updates on arXiv.org

With the progress of AI-based facial forgery (i.e., deepfakes), people are increasingly concerned about its abuse. Although efforts have been made to train classification (also known as deepfake detection) models that recognize such forgeries, existing models suffer from poor generalization to unseen forgery technologies and high sensitivity to changes in image/video quality. In this paper, we advocate adversarial training for improving the generalization ability to both unseen facial forgeries and unseen image/video qualities. We believe training with samples that are …

adversarial deepfake forensics game
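
The excerpt only names the general technique (adversarial training of a deepfake detector), not the paper's actual attack, architecture, or hyperparameters. The following is a minimal sketch of that general idea, assuming PyTorch/torchvision, a hypothetical ResNet-18 real/fake classifier, a one-step FGSM perturbation, an assumed epsilon of 2/255, and an assumed equal weighting of clean and adversarial losses; none of these choices come from the paper itself.

import torch
import torch.nn.functional as F
import torchvision

# Hypothetical binary real/fake classifier; the paper's actual backbone,
# attack, and training schedule are not given in this excerpt.
model = torchvision.models.resnet18(num_classes=2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
EPSILON = 2.0 / 255  # assumed L-infinity perturbation budget


def fgsm_perturb(model, images, labels, eps):
    """Craft one-step FGSM adversarial examples against the current model."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    grad, = torch.autograd.grad(loss, images)
    # Step in the direction that increases the detector's loss, then clamp
    # back into the valid image range.
    return (images + eps * grad.sign()).clamp(0.0, 1.0).detach()


def adversarial_train_step(images, labels):
    """One update on a mix of clean and adversarially perturbed face crops."""
    model.train()
    adv_images = fgsm_perturb(model, images, labels, EPSILON)
    optimizer.zero_grad()
    loss = (F.cross_entropy(model(images), labels)
            + F.cross_entropy(model(adv_images), labels))
    loss.backward()
    optimizer.step()
    return loss.item()

Here images would be a batch of face crops scaled to [0, 1] and labels would be 0 for real and 1 for fake; the intuition, as the abstract suggests, is that training on perturbed samples the current detector gets wrong pushes it toward features that hold up under unseen forgery methods and quality changes.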
