April 17, 2023, 1:12 a.m. | Dingcheng Yang, Wenjian Yu, Zihao Xiao, Jiaqi Luo

cs.CR updates on arXiv.org

Deep neural networks (DNNs) have been shown to be vulnerable to adversarial examples. Moreover, the transferability of adversarial examples has received broad attention in recent years: adversarial examples crafted on a surrogate model can also attack unknown models. This phenomenon gave rise to transfer-based adversarial attacks, which aim to improve the transferability of the generated adversarial examples. In this paper, we propose to improve the transferability of adversarial examples in the transfer-based attack via masking …
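
The truncated abstract leaves the paper's masking procedure unspecified, but the transfer setting it describes is easy to illustrate. Below is a minimal sketch in PyTorch, assuming torchvision models are available: adversarial examples are crafted with plain FGSM (a standard baseline, not the paper's masking method) on a white-box surrogate and then evaluated on an unseen target model. The specific model pair, epsilon, and random inputs are placeholder assumptions.

```python
# Minimal sketch of a transfer-based adversarial attack. FGSM stands in
# for the crafting step; the paper's masking technique is not reproduced
# here because the abstract is truncated.
import torch
import torch.nn.functional as F
from torchvision import models


def fgsm_attack(model, x, y, epsilon):
    """Craft adversarial examples on the (white-box) surrogate model."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # One signed-gradient step, then clamp to the valid input range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0, 1).detach()


# Surrogate: gradients available. Target: treated as a black box.
# Model choices are illustrative assumptions (torchvision >= 0.13 API).
surrogate = models.resnet18(weights="IMAGENET1K_V1").eval()
target = models.vgg16(weights="IMAGENET1K_V1").eval()

x = torch.rand(4, 3, 224, 224)    # placeholder inputs
y = torch.randint(0, 1000, (4,))  # placeholder labels
x_adv = fgsm_attack(surrogate, x, y, epsilon=8 / 255)

# Transferability: do surrogate-crafted examples also fool the target?
with torch.no_grad():
    fooled = (target(x_adv).argmax(1) != y).float().mean()
print(f"Transfer fooling rate on target model: {fooled:.2%}")
```

The fooling rate measured on the unseen target model is the quantity that transfer-based attacks, including the masking approach the abstract proposes, seek to maximize.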
