May 7, 2024, 4:11 a.m. | Peiyu Yang, Naveed Akhtar, Jiantong Jiang, Ajmal Mian

cs.CR updates on arXiv.org (arxiv.org)

arXiv:2405.02344v1 Announce Type: new
Abstract: Attribution methods compute importance scores for input features to explain the output predictions of deep models. However, accurate assessment of attribution methods is challenged by the lack of benchmark fidelity for attributing model predictions. Moreover, other confounding factors in attribution estimation, including the setup choices of post-processing techniques and explained model predictions, further compromise the reliability of the evaluation. In this work, we first identify a set of fidelity criteria that reliable benchmarks for attribution …
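For context on what an attribution method computes, here is a minimal sketch of one common approach (gradient x input saliency), written in PyTorch. This is only an illustration of the general idea of per-feature importance scores, not the benchmark or fidelity criteria proposed in the paper; the toy model, input, and target class below are placeholders.

```python
# Minimal sketch: gradient x input attribution for a toy classifier.
# Assumption: any differentiable model would do; this MLP is a stand-in.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))
model.eval()

x = torch.randn(1, 4, requires_grad=True)   # one input with 4 features
target_class = 2                            # the prediction being explained

# Forward pass, then backpropagate the score of the explained class.
score = model(x)[0, target_class]
score.backward()

# Importance score per input feature: gradient of the class score with
# respect to the input, multiplied element-wise by the input itself.
attribution = (x.grad * x).detach().squeeze(0)
print(attribution)
```

Evaluating such methods reliably is exactly where the confounding factors named in the abstract (post-processing choices and which prediction is explained) come into play.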

