Feb. 7, 2024, 5:10 a.m. | Baihe Huang, Zhao Song, Runzhou Tao, Junze Yin, Ruizhe Zhang, Danyang Zhuo

cs.CR updates on arXiv.org

Training neural networks usually requires large amounts of sensitive training data, so protecting the privacy of training data has become a critical topic in deep learning research. InstaHide is a state-of-the-art scheme for protecting training data privacy with only minor effects on test accuracy, and its security has become a salient question. In this paper, we systematically study recent attacks on InstaHide and present a unified framework to understand and analyze these attacks. We find that existing …
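For context, InstaHide (Huang et al., ICML 2020) encodes each private image by mixing it with several other images using random convex weights and then applying a random pixel-wise sign flip. The following is a minimal sketch of that encoding step; the function name instahide_encode, the Dirichlet draw for the mixing weights, and the [-1, 1] pixel scaling are illustrative assumptions here, not the paper's exact sampling procedure.

import numpy as np

def instahide_encode(private_image, mix_images, k=4, rng=None):
    """Sketch of InstaHide-style encoding: mix the private image with
    k-1 other images via random convex weights, then flip the sign of
    each pixel with probability 1/2. Images are assumed to be float
    arrays of identical shape, scaled to [-1, 1]."""
    rng = np.random.default_rng() if rng is None else rng
    # Draw k random convex mixing coefficients (non-negative, sum to 1).
    # The Dirichlet distribution is an assumption for this sketch.
    lam = rng.dirichlet(np.ones(k))
    # Pick k-1 distinct images to mix in (private and/or public pool).
    idx = rng.choice(len(mix_images), size=k - 1, replace=False)
    mixed = lam[0] * private_image
    for w, i in zip(lam[1:], idx):
        mixed = mixed + w * mix_images[i]
    # Random pixel-wise sign-flip mask hides the sign information.
    mask = rng.choice([-1.0, 1.0], size=mixed.shape)
    return mask * mixed

The attacks the paper analyzes aim to invert exactly this kind of encoding, recovering the private image from one or more encoded samples.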

