May 8, 2023, 1:10 a.m. | Yulong Wang, Tianxiang Li, Shenghong Li, Xin Yuan, Wei Ni

cs.CR updates on arXiv.org arxiv.org

Deep Neural Networks (DNNs) are vulnerable to adversarial examples, and adversarial attack models, e.g., DeepFool, are emerging faster than adversarial example detection techniques can keep pace. This paper presents a new adversarial example detector that outperforms state-of-the-art detectors in identifying the latest adversarial attacks on image datasets. Specifically, we propose to use sentiment analysis for adversarial example detection, motivated by the progressively manifesting impact of an adversarial perturbation on the hidden-layer feature maps of a DNN under attack. Accordingly, we design …
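The key signal the abstract describes is the perturbation's impact accumulating through a DNN's hidden-layer feature maps. The toy NumPy sketch below (not the paper's code; the network, weights, and perturbation are all hypothetical) illustrates that idea: it compares the hidden activations of a small random ReLU network on a clean input versus a perturbed one, producing the per-layer deviation trace a detector could then classify.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-layer ReLU network with random weights (illustration only;
# a real detector would probe a trained DNN under attack).
weights = [rng.standard_normal((16, 32)),
           rng.standard_normal((32, 32)),
           rng.standard_normal((32, 10))]

def feature_maps(x):
    """Return the hidden-layer activations (feature maps) for input x."""
    acts, h = [], x
    for w in weights[:-1]:
        h = np.maximum(h @ w, 0.0)  # ReLU hidden layer
        acts.append(h)
    return acts

def layerwise_deviation(x_clean, x_perturbed):
    """Per-layer L2 distance between clean and perturbed feature maps --
    a simple proxy for the progressively manifesting perturbation impact."""
    return [float(np.linalg.norm(a - b))
            for a, b in zip(feature_maps(x_clean), feature_maps(x_perturbed))]

x = rng.standard_normal(16)
x_adv = x + 0.05 * np.sign(rng.standard_normal(16))  # toy sign perturbation
print(layerwise_deviation(x, x_adv))
```

A detector in the spirit of the paper would treat this layer-indexed deviation sequence as its input signal rather than the raw image itself.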
