May 8, 2023, 1:10 a.m. | Yulong Wang, Tianxiang Li, Shenghong Li, Xin Yuan, Wei Ni

cs.CR updates on arXiv.org arxiv.org

Deep Neural Networks (DNNs) are vulnerable to adversarial examples, and adversarial attack models, e.g., DeepFool, are on the rise and outrunning adversarial example detection techniques. This paper presents a new adversarial example detector that outperforms state-of-the-art detectors in identifying the latest adversarial attacks on image datasets. Specifically, we propose to use sentiment analysis for adversarial example detection, motivated by the progressively manifesting impact of an adversarial perturbation on the hidden-layer feature maps of a DNN under attack. Accordingly, we design …
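The abstract's key observation is that an adversarial perturbation's footprint grows as it propagates through a DNN's hidden layers. The following is a minimal, self-contained sketch of that idea only, not the authors' detector: the toy network, its weights, and the strictly-increasing-deviation decision rule are all hypothetical stand-ins for illustration.

```python
# Illustrative sketch only: a toy stand-in for the idea that an adversarial
# perturbation's impact grows across a DNN's hidden layers. The network,
# weights, and decision rule below are hypothetical, not the paper's method.

def relu(v):
    return [max(0.0, a) for a in v]

def forward_with_features(weights, x):
    """Run a toy fully connected net and collect each hidden layer's feature map."""
    feats, h = [], x
    for W in weights:
        h = relu([sum(w * a for w, a in zip(row, h)) for row in W])
        feats.append(h)
    return feats

def layer_deviation(feats_a, feats_b):
    """Mean absolute difference between two inputs' feature maps, per layer."""
    return [sum(abs(p - q) for p, q in zip(fa, fb)) / len(fa)
            for fa, fb in zip(feats_a, feats_b)]

def looks_adversarial(devs):
    """Hypothetical rule: flag an input whose deviation strictly grows with depth."""
    return all(devs[i] < devs[i + 1] for i in range(len(devs) - 1))

# Toy demo: the weights slightly amplify signals, so a small input
# perturbation leaves a progressively larger trace in deeper feature maps.
weights = [[[1.2, 0.0], [0.0, 1.2]]] * 3
clean = [1.0, -1.0]
perturbed = [1.05, -1.0]  # small adversarial-style perturbation

devs = layer_deviation(forward_with_features(weights, clean),
                       forward_with_features(weights, perturbed))
```

In a real setting the per-layer feature maps would come from the attacked DNN itself (e.g., via forward hooks), and the hand-written monotonicity rule would be replaced by the learned detector the paper proposes.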
