AUTOLYCUS: Exploiting Explainable AI (XAI) for Model Extraction Attacks against White-Box Models. (arXiv:2302.02162v2 [cs.LG] UPDATED)
cs.CR updates on arXiv.org
Explainable Artificial Intelligence (XAI) encompasses a range of techniques
and procedures aimed at elucidating the decision-making processes of AI models.
While XAI is valuable for understanding the reasoning behind AI models, the
information revealed in the process introduces potential security and privacy vulnerabilities.
Existing literature has identified privacy risks targeting machine learning
models, including membership inference, model inversion, and model extraction
attacks. Depending on the settings and parties involved, such attacks may
target either the model itself or the training …
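To illustrate the model extraction threat the abstract mentions, here is a minimal, self-contained sketch of the general idea: an attacker who can only query a black-box model recovers its decision boundary from the answers alone. The 1-D threshold classifier, the secret value, and the bisection strategy below are all illustrative assumptions for exposition, not the AUTOLYCUS technique (which leverages XAI explanations) or any specific model from the paper.

```python
# Hypothetical black-box target: a 1-D threshold classifier the attacker
# can query but not inspect. The threshold is the model's "secret".
SECRET_THRESHOLD = 0.37  # assumed value, for illustration only

def target_model(x: float) -> int:
    """Oracle access: returns only the predicted label."""
    return 1 if x >= SECRET_THRESHOLD else 0

def extract_threshold(n_queries: int = 40) -> float:
    """Toy model extraction: locate the decision boundary on [0, 1]
    by binary search, using nothing but label queries."""
    lo, hi = 0.0, 1.0
    for _ in range(n_queries):
        mid = (lo + hi) / 2
        if target_model(mid) == 1:
            hi = mid  # boundary is at or below mid
        else:
            lo = mid  # boundary is above mid
    return (lo + hi) / 2

stolen = extract_threshold()
```

With 40 queries, bisection pins the boundary to within about 2^-40, so the attacker's surrogate replicates the target almost exactly. Real extraction attacks face multi-dimensional inputs and noisy outputs; the paper's point is that XAI explanations give the attacker far more signal per query than plain labels do.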