June 17, 2022, 1:20 a.m. | Dingcheng Yang, Zihao Xiao, Wenjian Yu

cs.CR updates on arXiv.org arxiv.org

Deep neural networks (DNNs) for image classification are known to be
vulnerable to adversarial examples. Moreover, adversarial examples exhibit
transferability: an adversarial example crafted for one DNN model can fool
another black-box model with non-trivial probability. This gave birth to the
transfer-based adversarial attack, in which adversarial examples generated by a
pretrained or otherwise known model (called the surrogate model) are used to
conduct a black-box attack. There is some work on how to generate the
adversarial examples from a …
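The transfer idea above can be illustrated with a minimal toy sketch: craft a perturbation against a known "surrogate" model using one FGSM step, then check it also flips the prediction of a second, unseen model. The linear models, weights, and the `fgsm`/`predict` helpers here are all hypothetical stand-ins for real DNNs, chosen only to make the mechanism concrete.

```python
import numpy as np

# Hypothetical toy setup: two linear "models" standing in for DNNs.
# w_surrogate is the known surrogate; w_blackbox is the unseen victim,
# made similar on purpose (transferability relies on similar decision
# boundaries learned from similar data).
w_surrogate = np.array([1.0, -2.0, 0.5])
w_blackbox = np.array([0.9, -1.8, 0.6])

def predict(w, x):
    """Binary label: sign of the linear score."""
    return 1 if w @ x > 0 else -1

def fgsm(w, x, y, eps):
    """One FGSM step against a linear model: move x against the true
    label y along the sign of the loss gradient (here, grad of -y*(w@x))."""
    grad = -y * w
    return x + eps * np.sign(grad)

x = np.array([0.5, -0.2, 0.1])  # clean input, label +1 under both models
y = 1
x_adv = fgsm(w_surrogate, x, y, eps=0.6)  # crafted on the surrogate only

print(predict(w_surrogate, x), predict(w_blackbox, x))        # 1 1
print(predict(w_surrogate, x_adv), predict(w_blackbox, x_adv))  # -1 -1
```

Although the perturbation never touched `w_blackbox`, it flips that model's prediction too, which is the transferability the paper's attack setting builds on.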

