May 8, 2023, 1:10 a.m. | Gon Buzaglo, Niv Haim, Gilad Yehudai, Gal Vardi, Michal Irani

cs.CR updates on arXiv.org (arxiv.org)

Reconstructing samples from the training set of trained neural networks is a
major privacy concern. Haim et al. (2022) recently showed that it is possible
to reconstruct training samples from neural network binary classifiers, based
on theoretical results about the implicit bias of gradient methods. In this
work, we present several improvements and new insights over this previous work.
As our main improvement, we show that training-data reconstruction is possible
in the multi-class setting and that the reconstruction quality is …
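
To make the underlying idea concrete, below is a hedged sketch (not the authors' code) of the KKT-based reconstruction objective that Haim et al. (2022) describe for the binary case, which this work builds on. The premise: for a homogeneous network trained by gradient descent on separable data, the implicit bias results imply the trained parameters θ approximately satisfy a max-margin stationarity condition θ = Σᵢ λᵢ yᵢ ∇_θ f(xᵢ; θ) with λᵢ ≥ 0, so candidate samples and multipliers can be optimized until that combination matches the actual trained weights. All names and signatures here (reconstruction_loss, xs, lambdas, ys) are hypothetical, and the multi-class extension in this paper is not shown.

```python
import torch
import torch.nn as nn

def reconstruction_loss(model: nn.Module, xs: torch.Tensor,
                        lambdas: torch.Tensor, ys: torch.Tensor) -> torch.Tensor:
    """Sketch of a KKT-matching reconstruction objective (binary case).

    xs:      (m, d) candidate training samples (optimized)
    lambdas: (m,)   KKT multipliers, constrained nonnegative (optimized)
    ys:      (m,)   labels in {-1, +1}
    """
    params = list(model.parameters())
    grads_sum = None
    for x, lam, y in zip(xs, lambdas, ys):
        out = model(x.unsqueeze(0)).squeeze()            # scalar logit f(x; theta)
        g = torch.autograd.grad(out, params, create_graph=True)
        term = [lam.clamp(min=0.0) * y * gi for gi in g]  # lambda_i * y_i * grad_theta f(x_i)
        grads_sum = term if grads_sum is None else [s + t for s, t in zip(grads_sum, term)]
    # Penalize the mismatch between the trained weights and the KKT combination.
    return sum(((p.detach() - s) ** 2).sum() for p, s in zip(params, grads_sum))
```

In such a sketch, one would initialize xs and lambdas randomly with requires_grad=True and minimize this loss with a standard optimizer (e.g., Adam), hoping that some of the recovered candidates converge to actual training samples.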

