July 10, 2024, 4:18 a.m. | Abdullah Caglar Oksuz, Anisa Halimi, Erman Ayday

cs.CR updates on arXiv.org arxiv.org

arXiv:2302.02162v3 Announce Type: replace-cross
Abstract: Explainable Artificial Intelligence (XAI) aims to uncover the decision-making processes of AI models. However, the data used for such explanations can pose security and privacy risks. Existing literature identifies attacks on machine learning models, including membership inference, model inversion, and model extraction attacks. These attacks target either the model or the training data, depending on the settings and parties involved.
XAI tools can increase the vulnerability to model extraction attacks, which is a …
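To see why explanations can leak model internals, consider a minimal sketch (not taken from the paper): a gradient-based "saliency" explanation of a logistic-regression victim model. For this model class the explanation is proportional to the weight vector, so a single query with its explanation lets an attacker recover the parameters exactly. The victim functions and variable names below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical victim: a logistic-regression model hidden from the attacker.
w_true = rng.normal(size=5)
b_true = 0.3

def victim_predict(x):
    # Black-box prediction API: returns the probability p(x).
    return 1.0 / (1.0 + np.exp(-(x @ w_true + b_true)))

def victim_explain(x):
    # Gradient ("saliency") explanation: dp/dx = p * (1 - p) * w.
    p = victim_predict(x)
    return p * (1.0 - p) * w_true

# Attacker: one query plus its explanation suffices to recover the weights,
# since the explanation equals p * (1 - p) * w and p is observed.
x = rng.normal(size=5)
p = victim_predict(x)
g = victim_explain(x)
w_recovered = g / (p * (1.0 - p))

# The bias follows from the logit of the observed probability.
b_recovered = np.log(p / (1.0 - p)) - x @ w_true

print(np.allclose(w_recovered, w_true))  # True
print(np.allclose(b_recovered, b_true))  # True
```

For deep models the recovery is not this direct, but the same principle holds: explanations add per-query information about the decision surface, reducing the number of queries an extraction attack needs to train a faithful surrogate.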
