Adversarial Sparse Teacher: Defense Against Distillation-Based Model Stealing Attacks Using Adversarial Examples
March 11, 2024, 4:11 a.m. | Eda Yilmaz, Hacer Yalim Keles
cs.CR updates on arXiv.org (arxiv.org)
Abstract: Knowledge Distillation (KD) facilitates the transfer of discriminative capabilities from an advanced teacher model to a simpler student model, improving the student's performance without compromising accuracy. It is also exploited for model stealing attacks, where adversaries use KD to mimic the functionality of a teacher model. Recent developments in this domain have been influenced by the Stingy Teacher model, whose empirical analysis showed that sparse outputs can significantly degrade the performance of student models. Addressing …
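To make the mechanism concrete, here is a minimal sketch, assuming a PyTorch setting, of the standard distillation objective a model-stealing adversary would optimize, alongside a top-k logit sparsification in the spirit of Stingy Teacher. The function names, the temperature, and the choice of k are illustrative assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """Standard KD objective (Hinton-style): KL divergence between the
    temperature-softened teacher and student distributions. This is the
    channel a model-stealing adversary exploits when querying a teacher."""
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

def sparsify_logits(logits, k=2):
    """Top-k sparsification in the spirit of Stingy Teacher: release only
    the k largest logits per input and suppress the rest, starving a
    distilling student of the teacher's "dark knowledge". k=2 and the
    -inf mask are illustrative choices, not the paper's exact method."""
    topk_vals, topk_idx = logits.topk(k, dim=-1)
    sparse = torch.full_like(logits, float("-inf"))
    sparse.scatter_(-1, topk_idx, topk_vals)  # keep only the top-k entries
    return sparse

# Toy usage: a student distilling against sparsified teacher outputs.
teacher_logits = torch.randn(8, 10)                     # batch of 8, 10 classes
student_logits = torch.randn(8, 10, requires_grad=True)
loss = distillation_loss(student_logits, sparsify_logits(teacher_logits))
loss.backward()                                         # gradients w.r.t. student
```

Per its title, the paper's Adversarial Sparse Teacher goes further by combining sparse outputs with adversarial examples; the sketch above only illustrates the sparse-output building block the abstract attributes to Stingy Teacher.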