EZClone: Improving DNN Model Extraction Attack via Shape Distillation from GPU Execution Profiles. (arXiv:2304.03388v1 [cs.LG])
cs.CR updates on arXiv.org
Deep Neural Networks (DNNs) have become ubiquitous due to their performance
on prediction and classification problems. However, they face a variety of
threats as their usage spreads. Model extraction attacks, which steal DNNs,
endanger intellectual property, data privacy, and security. Previous research
has shown that system-level side-channels can be used to leak the architecture
of a victim DNN, exacerbating these risks. We propose two DNN architecture
extraction techniques catering to various threat models. The first technique
uses a malicious, dynamically …
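The abstract describes model extraction attacks, in which an adversary steals a victim DNN by interacting with it. EZClone's own technique relies on GPU execution profiles, but the general attack family can be illustrated with a much simpler, purely hypothetical sketch: a query-based extraction against a black-box linear model, where the attacker recovers a functionally equivalent surrogate from input/output pairs alone. All names below (`victim_predict`, `W_stolen`, etc.) are illustrative assumptions, not the paper's method.

```python
import numpy as np

# Hypothetical illustration (NOT the EZClone technique): a generic
# query-based model-extraction attack against a black-box linear model.
rng = np.random.default_rng(0)
W_victim = rng.normal(size=(4, 3))  # secret victim weights, unseen by attacker

def victim_predict(x):
    # The black-box API: the attacker sends inputs and receives raw outputs.
    return x @ W_victim

# The attacker probes the victim with random queries...
queries = rng.normal(size=(200, 4))
responses = victim_predict(queries)

# ...and fits a surrogate that mimics the victim (least-squares fit).
W_stolen, *_ = np.linalg.lstsq(queries, responses, rcond=None)

# The surrogate now agrees with the victim on inputs it never queried.
test_x = rng.normal(size=(5, 4))
agreement = np.allclose(test_x @ W_stolen, victim_predict(test_x), atol=1e-6)
print(agreement)  # → True
```

Real attacks target nonlinear DNNs and typically train a surrogate network on the victim's outputs; side-channel leaks of the architecture (the paper's focus) make that surrogate far cheaper to train, because the attacker no longer has to search over candidate architectures.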