Aug. 5, 2022, 1:20 a.m. | Yuexin Xiang, Tiantian Li, Wei Ren, Tianqing Zhu, Kim-Kwang Raymond Choo

cs.CR updates on arXiv.org (arxiv.org)

With the increasing attention to deep neural network (DNN) models, attacks
targeting such models are also emerging. For example, an attacker may
carefully construct images in specific ways (also referred to as adversarial
examples) aiming to mislead DNN models into producing incorrect classification
results. Correspondingly, many approaches have been proposed to detect and
mitigate adversarial examples, usually targeting specific, dedicated attacks.
In this paper, we propose a novel digital watermark-based method to generate
image adversarial examples that fool DNN models. …
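To make the idea concrete, below is a minimal illustrative sketch, not the authors' algorithm (which the truncated abstract does not specify): a watermark image is blended into a host image at low amplitude, and a pretrained classifier is queried to see whether its prediction flips. The file names, the blending factor `alpha`, and the choice of ResNet-18 are all assumptions for demonstration.

```python
# Illustrative sketch only: embed a faint watermark into a host image and
# check whether a pretrained classifier's top-1 prediction changes.
import torch
from PIL import Image
from torchvision import models, transforms

# Standard ImageNet preprocessing for the pretrained model.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def predict(model, image):
    """Return the top-1 class index for a PIL image."""
    with torch.no_grad():
        logits = model(preprocess(image).unsqueeze(0))
    return logits.argmax(dim=1).item()

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

# "host.png" and "watermark.png" are hypothetical file names.
host = Image.open("host.png").convert("RGB")
mark = Image.open("watermark.png").convert("RGB").resize(host.size)

# Embed the watermark as a low-amplitude blend; a small alpha keeps the
# change visually subtle, in the spirit of invisible watermarking.
alpha = 0.08
candidate = Image.blend(host, mark, alpha)

if predict(model, candidate) != predict(model, host):
    print("prediction flipped: the watermarked image is adversarial")
else:
    print("prediction unchanged: try another watermark or a larger alpha")
```

In practice one would search over watermarks and embedding strengths until the prediction flips while the perturbation remains imperceptible; the loop above shows only a single candidate check.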

Tags: adversarial, digital
