Web: http://arxiv.org/abs/2206.10469

June 23, 2022, 1:20 a.m. | Nicholas Carlini, Matthew Jagielski, Chiyuan Zhang, Nicolas Papernot, Andreas Terzis, Florian Tramer

cs.CR updates on arXiv.org arxiv.org

Machine learning models trained on private datasets have been shown to leak their private data. While recent work has found that the average data point is rarely leaked, outlier samples are frequently subject to memorization and, consequently, privacy leakage. We demonstrate and analyse an Onion Effect of memorization: removing the "layer" of outlier points that are most vulnerable to a privacy attack exposes a new layer of previously safe points to the same attack. We perform several experiments to study …

