Model Conversion via Differentially Private Data-Free Distillation. (arXiv:2304.12528v1 [cs.CR])
cs.CR updates on arXiv.org
While many valuable deep models trained on large-scale data have been
released to benefit the artificial intelligence community, they may
encounter attacks in deployment that lead to privacy leakage of the
training data. In this work, we propose a learning approach termed
differentially private data-free distillation (DPDFD) for model
conversion, which converts a pretrained model (teacher) into its
privacy-preserving counterpart (student) via an intermediate generator,
without access to the training data. The learning coordinates three
parties in a unified way. First, …
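The abstract's core idea, distilling a teacher into a student on generator-produced synthetic data while adding calibrated noise for differential privacy, can be sketched in a few lines. This is not the paper's actual DPDFD algorithm: the "generator" is replaced by a plain Gaussian sampler, the models are linear, and all names and hyperparameters (`clip_norm`, `noise_mult`, etc.) are hypothetical; the noising step follows the generic DP-SGD recipe of per-example clipping plus Gaussian noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins, not the paper's architecture.
d_in, d_out, batch = 8, 3, 32
W_teacher = rng.normal(size=(d_in, d_out))

def teacher(x):
    """Pretrained model; its original training data is never touched."""
    return x @ W_teacher

W_student = np.zeros((d_in, d_out))
clip_norm = 1.0   # per-example gradient clipping bound C
noise_mult = 0.5  # Gaussian noise multiplier (drives the DP guarantee)
lr = 0.05

x_eval = rng.normal(size=(256, d_in))
init_loss = float(np.mean((x_eval @ W_student - teacher(x_eval)) ** 2))

for _ in range(500):
    # Stand-in "generator": a Gaussian sampler producing synthetic inputs.
    x = rng.normal(size=(batch, d_in))
    err = x @ W_student - teacher(x)            # match the teacher's outputs
    grads = np.einsum('bi,bo->bio', x, err)     # per-example gradients
    norms = np.linalg.norm(grads.reshape(batch, -1), axis=1)
    factor = np.maximum(1.0, norms / clip_norm).reshape(batch, 1, 1)
    g = (grads / factor).sum(axis=0)            # clip each example, then sum
    g += rng.normal(scale=noise_mult * clip_norm, size=g.shape)  # DP noise
    W_student -= lr * g / batch

final_loss = float(np.mean((x_eval @ W_student - teacher(x_eval)) ** 2))
```

The key property the sketch illustrates is that the student's updates depend on the teacher only through clipped, noised gradients on synthetic inputs, so the teacher's training data never enters the student's optimization directly.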