To share or not to share: What risks would laypeople accept to give sensitive data to differentially-private NLP systems?
March 26, 2024, 4:11 a.m. | Christopher Weiss, Frauke Kreuter, Ivan Habernal
cs.CR updates on arXiv.org
Abstract: Although the NLP community has adopted central differential privacy as a go-to framework for privacy-preserving model training or data sharing, the choice and interpretation of the key parameter, the privacy budget $\varepsilon$ that governs the strength of privacy protection, remains largely arbitrary. We argue that determining the $\varepsilon$ value should not rest solely in the hands of researchers or system developers, but must also take into account the actual people who share their potentially sensitive data. …
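To make the abstract's central object concrete: in $\varepsilon$-differential privacy, a smaller budget forces more noise and hence stronger protection. A minimal sketch of this trade-off, using the standard Laplace mechanism (an illustration of the general framework, not the paper's method; the query, function name, and $\varepsilon$ values are assumptions):

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release true_value with epsilon-differential privacy via Laplace noise.

    The noise scale is sensitivity / epsilon, so a smaller privacy budget
    (epsilon) yields stronger privacy but a noisier answer.
    """
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Illustrative counting query (L1 sensitivity 1) under two budgets.
count = 1234.0
print(laplace_mechanism(count, sensitivity=1.0, epsilon=0.1))   # tight budget: strong privacy, noisy
print(laplace_mechanism(count, sensitivity=1.0, epsilon=10.0))  # loose budget: weak privacy, accurate
```

The paper's point is that where on this spectrum a system should sit is a question for the people whose data is at stake, not only for the developers choosing $\varepsilon$.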
More from arxiv.org / cs.CR updates on arXiv.org
IDEA: Invariant Defense for Graph Adversarial Robustness
2 days, 11 hours ago | arxiv.org
FairCMS: Cloud Media Sharing with Fair Copyright Protection
2 days, 11 hours ago | arxiv.org
Efficient unitary designs and pseudorandom unitaries from permutations
2 days, 11 hours ago | arxiv.org
Jobs in InfoSec / Cybersecurity
SOC 2 Manager, Audit and Certification
@ Deloitte | US and CA Multiple Locations
Network Security Engineer
@ Meta | Menlo Park, CA | Remote, US
Security Engineer, Investigations - i3
@ Meta | Washington, DC
Threat Investigator- Security Analyst
@ Meta | Menlo Park, CA | Seattle, WA | Washington, DC
Security Operations Engineer II
@ Microsoft | Redmond, Washington, United States
Engineering -- Tech Risk -- Global Cyber Defense & Intelligence -- Bug Bounty -- Associate -- Dallas
@ Goldman Sachs | Dallas, Texas, United States