When Does Differentially Private Learning Not Suffer in High Dimensions? (arXiv:2207.00160v3 [cs.LG] UPDATED)
Aug. 16, 2022, 1:20 a.m. | Xuechen Li, Daogao Liu, Tatsunori Hashimoto, Huseyin A. Inan, Janardhan Kulkarni, Yin Tat Lee, Abhradeep Guha Thakurta
cs.CR updates on arXiv.org
Large pretrained models can be privately fine-tuned to achieve performance
approaching that of non-private models. A common theme in these results is the
surprising observation that high-dimensional models can achieve favorable
privacy-utility trade-offs. This seemingly contradicts known results on the
model-size dependence of differentially private convex learning and raises the
following research question: When does the performance of differentially
private learning not degrade with increasing model size? We identify that the
magnitudes of gradients projected onto subspaces is a key …
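The abstract refers to differentially private fine-tuning, typically done with DP-SGD-style training: each per-example gradient is clipped to a fixed norm before aggregation, and Gaussian noise scaled to that clipping bound is added. A minimal sketch of that aggregation step (an illustration of standard DP-SGD, not the paper's specific analysis; the function name and defaults are hypothetical):

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Aggregate per-example gradients with DP-SGD-style clipping and noise.

    Each gradient is rescaled so its L2 norm is at most clip_norm, the
    clipped gradients are summed, and Gaussian noise with standard
    deviation noise_multiplier * clip_norm is added before averaging.
    """
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down only if the gradient exceeds the clipping bound.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)
```

Because the noise scale depends only on the clipping bound and not on the model dimension per coordinate, naive analyses suggest utility should degrade as dimension grows, which is the tension the paper examines.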
Jobs in InfoSec / Cybersecurity
Social Engineer For Reverse Engineering Exploit Study
@ Independent study | Remote
Senior Software Engineer, Security
@ Niantic | Zürich, Switzerland
Expert Consultant in Industrial Systems Security (M/F)
@ Devoteam | Levallois-Perret, France
Cybersecurity Analyst
@ Bally's | Providence, Rhode Island, United States
Digital Trust Cyber Defense Executive
@ KPMG India | Gurgaon, Haryana, India
Program Manager - Cybersecurity Assessment Services
@ TestPros | Remote (and DMV), DC