April 5, 2023, 1:10 a.m. | Ngoc-Bao Nguyen, Keshigeyan Chandrasegaran, Milad Abdollahzadeh, Ngai-Man Cheung

cs.CR updates on arXiv.org arxiv.org

Model inversion (MI) attacks aim to infer and reconstruct private training
data by abusing access to a model. MI attacks have raised concerns about the
leakage of sensitive information (e.g., private face images used to train a
face recognition system). Recently, several MI algorithms have been proposed
to improve attack performance. In this work, we revisit MI, study two
fundamental issues pertaining to all state-of-the-art (SOTA) MI algorithms,
and propose solutions to these issues which lead to a …
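To make the threat concrete, below is a minimal sketch of the generic white-box MI formulation: gradient-based optimization of an input to maximize the target model's confidence in a chosen identity. The `model`, `target_class`, and image dimensions are hypothetical placeholders, and this illustrates the common baseline objective, not the specific algorithm or fixes proposed in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-in for the released target model; a real attack
# would query the actual face-recognition network instead.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 64 * 64, 256),
    nn.ReLU(),
    nn.Linear(256, 1000),  # assume 1000 enrolled identities
)
model.eval()

target_class = 42  # identity whose private training images we try to recover
x = torch.randn(1, 3, 64, 64, requires_grad=True)  # start from random noise
optimizer = torch.optim.Adam([x], lr=0.05)

for step in range(500):
    optimizer.zero_grad()
    logits = model(x)
    # Basic inversion objective shared by white-box MI attacks:
    # push the input toward high confidence for the target identity.
    loss = F.cross_entropy(logits, torch.tensor([target_class]))
    loss.backward()
    optimizer.step()
    x.data.clamp_(-1.0, 1.0)  # keep the reconstruction in a valid pixel range

reconstruction = x.detach()  # candidate reconstruction of private training data
```

SOTA attacks refine this objective with learned image priors (e.g., a GAN over a public face dataset) so that reconstructions stay on the natural-image manifold; the paper's contribution concerns issues common to these algorithms.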
