Re-thinking Model Inversion Attacks Against Deep Neural Networks. (arXiv:2304.01669v1 [cs.LG])
cs.CR updates on arXiv.org
Model inversion (MI) attacks aim to infer and reconstruct private training
data by abusing access to a model. MI attacks have raised concerns about the
leakage of sensitive information (e.g., private face images used to train a
face recognition system). Recently, several algorithms for MI have been
proposed to improve attack performance. In this work, we revisit MI, study
two fundamental issues pertaining to all state-of-the-art (SOTA) MI algorithms,
and propose solutions to these issues which lead to a …
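To make the threat concrete, here is a minimal, hedged sketch of the core idea behind a basic model inversion attack: gradient ascent on the *input* to maximize a target class's confidence under a known model. This is purely illustrative against a toy linear softmax classifier (the weights and all names below are assumptions, not the paper's method or models):

```python
import numpy as np

# Toy "trained" linear softmax classifier -- weights are random stand-ins
# for illustration only, not a real face recognition model.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))   # 3 classes, 8 input features
b = np.zeros(3)

def softmax(z):
    e = np.exp(z - z.max())   # subtract max for numerical stability
    return e / e.sum()

def invert(target_class, steps=200, lr=0.5):
    """Gradient ascent on the input x to maximize p(target_class | x):
    the core recipe behind basic model inversion attacks."""
    x = np.zeros(8)
    for _ in range(steps):
        p = softmax(W @ x + b)
        # For a linear softmax model, d/dx log p_t = W_t - sum_c p_c * W_c
        grad = W[target_class] - p @ W
        x += lr * grad
    return x, softmax(W @ x + b)[target_class]

x_reconstructed, confidence = invert(target_class=1)
print(round(confidence, 3))
```

In a real attack the classifier is a deep network and the ascent runs through autograd, often with a generative prior constraining `x` to look like a natural image; the SOTA algorithms the abstract refers to differ mainly in how that prior and the attack objective are designed.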