April 17, 2023, 1:12 a.m. | Jingyuan Wang, Yufan Wu, Mingxuan Li, Xin Lin, Junjie Wu, Chao Li

cs.CR updates on arXiv.org arxiv.org

Despite their great success in a wide range of real-life applications, deep
neural network (DNN) models have long been criticized for their vulnerability
to adversarial attacks. Tremendous research effort has been dedicated to
mitigating the threat of adversarial attacks, but the essential trait of
adversarial examples is still not clear, and most existing methods remain
vulnerable to hybrid attacks and suffer from counterattacks. In light of this,
in this paper we first reveal a gradient-based correlation between sensitivity
analysis-based DNN interpreters …
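
The sketch below is only an illustration of the general connection the abstract alludes to, not the paper's method: gradient-based sensitivity-analysis interpreters (saliency maps) and FGSM-style adversarial examples are both derived from the gradient of the loss with respect to the input. The toy model, dummy data, and the epsilon value are placeholder assumptions.

```python
# Minimal sketch, assuming a toy classifier and dummy data (not the paper's setup).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # placeholder classifier
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 1, 28, 28, requires_grad=True)  # dummy input image
y = torch.tensor([3])                             # dummy label

loss = loss_fn(model(x), y)
loss.backward()

# Both quantities below come from the same input gradient:
saliency = x.grad.abs()                           # sensitivity-analysis interpretation
x_adv = (x + 0.03 * x.grad.sign()).clamp(0, 1)    # FGSM-style adversarial example
x_adv = x_adv.detach()
```

Because the saliency map and the perturbation direction share the same gradient, an interpreter of this kind naturally highlights the input regions an FGSM-style attack would exploit, which is the kind of gradient-based correlation the abstract points to.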

