To share or not to share: What risks would laypeople accept to give sensitive data to differentially-private NLP systems?
March 26, 2024, 4:11 a.m. | Christopher Weiss, Frauke Kreuter, Ivan Habernal
cs.CR updates on arXiv.org
Abstract: Although the NLP community has adopted central differential privacy as a go-to framework for privacy-preserving model training or data sharing, the choice and interpretation of the key parameter, the privacy budget $\varepsilon$ that governs the strength of privacy protection, remains largely arbitrary. We argue that determining the $\varepsilon$ value should not be solely in the hands of researchers or system developers, but must also take into account the actual people who share their potentially sensitive data. …