Knowledge Distillation-Based Model Extraction Attack using Private Counterfactual Explanations
April 5, 2024, 4:11 a.m. | Fatima Ezzeddine, Omran Ayoub, Silvia Giordano
cs.CR updates on arXiv.org arxiv.org
Abstract: In recent years, there has been a notable increase in the deployment of machine learning (ML) models as services (MLaaS) across diverse production software applications. In parallel, explainable AI (XAI) continues to evolve, addressing the need for transparency and trustworthiness in ML models. XAI techniques aim to enhance the transparency of ML models by providing explanations that offer insight into their decision-making process. Simultaneously, some MLaaS platforms now offer explanations alongside the …
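The attack class named in the title can be illustrated with a minimal sketch: an attacker queries a black-box MLaaS "victim" model for soft-label outputs and trains a surrogate (student) model on them, distillation-style. All names below are illustrative assumptions, not the paper's implementation, and the sketch omits the paper's use of counterfactual explanations.

```python
# Minimal, hypothetical sketch of knowledge-distillation-style model
# extraction. The "victim" and "query_mlaas" API are stand-ins for a real
# MLaaS endpoint; the paper's method additionally exploits counterfactual
# explanations, which this sketch does not model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Victim: a model the attacker can only query for probability outputs.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
victim = LogisticRegression(max_iter=1000).fit(X[:1000], y[:1000])

def query_mlaas(x):
    """Black-box API stand-in: returns class probabilities (soft labels)."""
    return victim.predict_proba(x)

# Attacker: issue synthetic queries, collect soft labels, train a surrogate.
X_query = rng.normal(size=(1500, 10))
soft_labels = query_mlaas(X_query)
# Distillation shortcut: fit on the victim's argmax labels; a full
# distillation objective would match the probability vectors instead.
surrogate = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                          random_state=0).fit(X_query,
                                              soft_labels.argmax(axis=1))

# Fidelity: how often the surrogate agrees with the victim on held-out data.
agreement = (surrogate.predict(X[1000:]) == victim.predict(X[1000:])).mean()
print(f"surrogate-victim agreement: {agreement:.2f}")
```

In this toy setting the surrogate typically reaches high agreement with the victim, which is the fidelity metric extraction attacks aim to maximize.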