all InfoSec news
Why You Should Not Trust Interpretations in Machine Learning: Adversarial Attacks on Partial Dependence Plots
May 2, 2024, 4:11 a.m. | Xi Xin, Giles Hooker, Fei Huang
cs.CR updates on arXiv.org arxiv.org
Abstract: The adoption of artificial intelligence (AI) across industries has led to the widespread use of complex black-box models and interpretation tools for decision making. This paper proposes an adversarial framework to uncover the vulnerability of permutation-based interpretation methods for machine learning tasks, with a particular focus on partial dependence (PD) plots. The framework modifies the original black-box model to manipulate its predictions for instances in the extrapolation domain. As a result, it produces …
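The attack the abstract describes hinges on how PD plots are computed: each feature value on the grid is substituted into every observation, which creates unrealistic feature combinations when features are correlated. A model that behaves normally on real data but differently on those off-manifold points can shift the PD curve without changing its observed accuracy. The following is a minimal illustrative sketch of that idea, not the paper's actual method; the correlation structure, the `|x0 - x1| > 0.5` extrapolation heuristic, and the constant off-manifold output are all assumptions chosen for the toy example.

```python
import numpy as np


def partial_dependence(model, X, feature, grid):
    """PD curve: for each grid value, fix `feature` to that value in
    every row of X and average the model's predictions."""
    pd_vals = []
    for v in grid:
        Xg = X.copy()
        Xg[:, feature] = v  # creates off-manifold points when features correlate
        pd_vals.append(model(Xg).mean())
    return np.array(pd_vals)


rng = np.random.default_rng(0)
# Two strongly correlated features: permuting x0 pushes points off-manifold.
x0 = rng.normal(size=500)
x1 = x0 + 0.1 * rng.normal(size=500)
X = np.column_stack([x0, x1])


def honest_model(X):
    # The original model: a simple additive function of both features.
    return X[:, 0] + X[:, 1]


def adversarial_model(X):
    # Agrees with honest_model on realistic inputs (where x0 ~ x1),
    # but returns a constant in the extrapolation region, so the PD
    # curve flattens while in-distribution predictions are unchanged.
    off_manifold = np.abs(X[:, 0] - X[:, 1]) > 0.5  # crude extrapolation test
    return np.where(off_manifold, 0.0, honest_model(X))


grid = np.linspace(-2, 2, 9)
pd_honest = partial_dependence(honest_model, X, 0, grid)
pd_adv = partial_dependence(adversarial_model, X, 0, grid)
```

On the training data the two models are indistinguishable, yet their PD curves for `x0` diverge sharply at the grid extremes, where nearly every substituted point lands in the extrapolation region.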