On the utility and protection of optimization with differential privacy and classic regularization techniques. (arXiv:2209.03175v1 [cs.LG])
Sept. 8, 2022, 1:20 a.m. | Eugenio Lomurno, Matteo Matteucci
cs.CR updates on arXiv.org arxiv.org
Owners and developers of deep learning models must now comply with stringent
privacy-preservation rules for their training data, which is usually
crowd-sourced and may retain sensitive information. The most widely adopted
method for enforcing privacy guarantees in a deep learning model relies on
optimization techniques that enforce differential privacy. According to the
literature, this approach has proven a successful defence against several
privacy attacks on models, but its downside is a substantial degradation of
the models' performance. In this work, we compare the …
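The abstract does not spell out the mechanism, but the standard differentially private optimizer it refers to is DP-SGD: each example's gradient is clipped to a fixed norm, the clipped gradients are averaged, and calibrated Gaussian noise is added before the update. A minimal pure-Python sketch of one such step (the function name, toy gradients, and parameter values below are illustrative, not taken from the paper):

```python
import math
import random

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, lr, params, rng):
    """One DP-SGD step: clip each example's gradient to clip_norm,
    average the clipped gradients, add Gaussian noise, and update params."""
    clipped = []
    for g in per_example_grads:
        norm = math.sqrt(sum(x * x for x in g))
        # Scale down only if the gradient exceeds the clipping threshold.
        scale = min(1.0, clip_norm / (norm + 1e-12))
        clipped.append([x * scale for x in g])
    n = len(clipped)
    noisy_mean = []
    for j in range(len(params)):
        avg = sum(g[j] for g in clipped) / n
        # Noise standard deviation is noise_multiplier * clip_norm,
        # divided by the batch size after averaging.
        noise = rng.gauss(0.0, noise_multiplier * clip_norm) / n
        noisy_mean.append(avg + noise)
    return [p - lr * gm for p, gm in zip(params, noisy_mean)]
```

The clipping bounds each example's influence on the update (the sensitivity), which is what lets the added Gaussian noise translate into a formal (ε, δ) differential-privacy guarantee; the noise is also the source of the utility loss the paper measures against classic regularization.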
Tags: differential privacy, optimization, privacy, protection, utility