March 2, 2023, 2:10 a.m. | Fan Wang, Adams Wai-Kin Kong

cs.CR updates on arXiv.org

Model attribution is a critical component of deep neural networks (DNNs), as it provides interpretability for otherwise opaque models. Recent studies have drawn attention to the security of attribution methods, as they are vulnerable to attribution attacks that generate visually similar images with dramatically different attributions. Existing work has empirically investigated improving the robustness of DNNs against such attacks; however, none of it explicitly quantifies the actual deviation of the attributions. In this work, for the first time, a constrained optimization problem is …
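The abstract is cut off before the paper's constrained-optimization formulation, but the attack it describes can be sketched generically. Below is a minimal, hypothetical PGD-style attribution attack in PyTorch: it keeps the perturbed image inside a small L-infinity ball while pushing its gradient (saliency) attribution away from the original. The choice of saliency as the attribution, cosine similarity as the objective, and the hyperparameters eps, alpha, and steps are illustrative assumptions, not the paper's method.

```python
import torch
import torch.nn.functional as F

def saliency(model, x, create_graph=False):
    """Plain gradient attribution: d(predicted-class score)/d(input)."""
    if not x.requires_grad:
        x = x.clone().requires_grad_(True)
    logits = model(x)
    score = logits.gather(1, logits.argmax(dim=1, keepdim=True)).sum()
    (grad,) = torch.autograd.grad(score, x, create_graph=create_graph)
    return grad

def attribution_attack(model, x, eps=8 / 255, alpha=1 / 255, steps=20):
    """Hypothetical PGD-style attribution attack: keep the image inside an
    L-inf ball of radius eps around x while driving its saliency map away
    from the original attribution."""
    target = saliency(model, x).detach().flatten(1)
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        # create_graph=True makes the attribution itself differentiable
        # w.r.t. delta (this needs twice-differentiable activations; for
        # piecewise-linear ReLU nets a softplus surrogate is typically used).
        attr = saliency(model, x + delta, create_graph=True).flatten(1)
        sim = F.cosine_similarity(attr, target).mean()
        (grad,) = torch.autograd.grad(sim, delta)
        with torch.no_grad():
            delta -= alpha * grad.sign()              # descend on similarity
            delta.clamp_(-eps, eps)                   # project into L-inf ball
            delta.copy_((x + delta).clamp(0, 1) - x)  # keep pixels in [0, 1]
    return (x + delta).detach()
```

Under these assumptions, `x_adv = attribution_attack(model, x)` would be visually near-identical to `x` yet carry an attribution map with low cosine similarity to the original; that deviation is the quantity the paper sets out to explicitly bound.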

