Considerations for Differentially Private Learning with Large-Scale Public Pretraining. (arXiv:2212.06470v1 [cs.LG])
Dec. 14, 2022, 2:10 a.m. | Florian Tramèr, Gautam Kamath, Nicholas Carlini
cs.CR updates on arXiv.org arxiv.org
The performance of differentially private machine learning can be boosted
significantly by leveraging the transfer learning capabilities of non-private
models pretrained on large public datasets. We critically review this approach.
We primarily question whether the use of large Web-scraped datasets should be
viewed as differential-privacy-preserving. We caution that publicizing these
models pretrained on Web data as "private" could lead to harm and erode the
public's trust in differential privacy as a meaningful definition of privacy.
Beyond the privacy considerations of …
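The approach the authors critique pairs non-private public pretraining with differentially private fine-tuning, typically via DP-SGD. As a minimal illustrative sketch (not the paper's code; all names and parameter values here are assumptions), the core DP-SGD aggregation step clips each per-example gradient and adds calibrated Gaussian noise:

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One DP-SGD-style private gradient aggregation step (illustrative sketch):
    clip each per-example gradient to L2 norm `clip_norm`, sum the clipped
    gradients, add Gaussian noise with std `noise_multiplier * clip_norm`,
    and average over the batch."""
    rng = rng or np.random.default_rng(0)
    grads = np.asarray(per_example_grads, dtype=float)
    # Per-example clipping: scale rows whose L2 norm exceeds clip_norm.
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = grads * scale
    # Gaussian noise calibrated to the clipping bound (the sensitivity).
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grads.shape[1])
    return (clipped.sum(axis=0) + noise) / len(grads)
```

Clipping bounds each example's influence on the update, which is what makes the Gaussian noise sufficient for a formal differential-privacy guarantee; the paper's argument concerns whether the Web-scraped pretraining data outside this mechanism undermines that guarantee's meaning.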
Jobs in InfoSec / Cybersecurity
Cybersecurity Skills Challenge -- Sponsored by DoD
@ Correlation One | United States
Security Operations Center (SOC) Analyst
@ GK Cybersecurity Group | Remote
Azure Security Architect
@ First Quality | Remote US - Eastern or Central Timezone
Staff Security Researcher (Network Protocols)
@ Palo Alto Networks | Santa Clara, CA, United States
Senior Product Manager - Endpoint Security
@ Ivanti | Bengaluru, India
Penetration Tester
@ Lostar | İstanbul, Türkiye