Knowledge Distillation-Based Model Extraction Attack using Private Counterfactual Explanations
April 5, 2024, 4:11 a.m. | Fatima Ezzeddine, Omran Ayoub, Silvia Giordano
cs.CR updates on arXiv.org arxiv.org
Abstract: In recent years, there has been a notable increase in the deployment of machine learning (ML) models as services (MLaaS) across diverse production software applications. In parallel, explainable AI (XAI) continues to evolve, addressing the need for transparency and trustworthiness in ML models. XAI techniques aim to enhance the transparency of ML models by providing insights, in the form of explanations, into their decision-making process. Simultaneously, some MLaaS platforms now offer explanations alongside the …
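To make the threat model concrete, the following is a minimal, hypothetical sketch of knowledge distillation-based model extraction in general, not the paper's actual method: an attacker queries a victim MLaaS endpoint for soft class probabilities and trains a surrogate "student" model to mimic them. All names (`victim_api`, `W_teacher`, `W_student`) and the toy linear models are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Victim model: the attacker sees only its outputs, as with an MLaaS API.
W_teacher = rng.normal(size=(2, 3))

def victim_api(X):
    """Hypothetical endpoint returning soft class probabilities."""
    return softmax(X @ W_teacher)

# Attacker queries the endpoint with self-chosen inputs.
X_query = rng.normal(size=(200, 2))
soft_labels = victim_api(X_query)

# Distillation: fit student weights by gradient descent on cross-entropy
# against the victim's soft labels (the "knowledge" being transferred).
W_student = np.zeros((2, 3))
lr = 0.5
for _ in range(300):
    probs = softmax(X_query @ W_student)
    grad = X_query.T @ (probs - soft_labels) / len(X_query)
    W_student -= lr * grad

# Fidelity: how often the surrogate agrees with the victim on fresh inputs.
X_test = rng.normal(size=(500, 2))
agreement = float(np.mean(
    victim_api(X_test).argmax(axis=1)
    == softmax(X_test @ W_student).argmax(axis=1)
))
print(agreement)
```

The paper's contribution, per the abstract, is to exploit counterfactual explanations returned alongside predictions as additional, highly informative query responses; the sketch above shows only the plain distillation baseline that such explanation-aware attacks build on.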