Sept. 8, 2022, 1:20 a.m. | Eugenio Lomurno, Matteo Matteucci

cs.CR updates on arXiv.org arxiv.org

Nowadays, owners and developers of deep learning models must consider stringent privacy-preservation rules for their training data, which is usually crowd-sourced and may retain sensitive information. The most widely adopted method to enforce privacy guarantees in a deep learning model relies on optimization techniques that enforce differential privacy. According to the literature, this approach has proven to be a successful defence against several privacy attacks on models, but its downside is a substantial degradation of the models' performance. In this work, we compare the …
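The excerpt does not include code, but the "optimization techniques enforcing differential privacy" it refers to are typically DP-SGD style: clip each example's gradient, then add calibrated Gaussian noise before the update. Below is a minimal NumPy sketch of one such step for a logistic-regression loss; the function name `dp_sgd_step` and all hyperparameter values (`clip_norm`, `noise_multiplier`, `lr`) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_sgd_step(w, X, y, clip_norm=1.0, noise_multiplier=1.1, lr=0.1):
    """One differentially private gradient step on a logistic-regression loss."""
    # Per-example gradients of the logistic loss: (sigmoid(x·w) - y) * x.
    preds = 1.0 / (1.0 + np.exp(-X @ w))
    per_example_grads = (preds - y)[:, None] * X  # shape: (batch, dim)

    # Clip each example's gradient to bound its L2 sensitivity by clip_norm.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads / np.maximum(1.0, norms / clip_norm)

    # Sum, add Gaussian noise scaled to the sensitivity, average, and step.
    noisy_sum = clipped.sum(axis=0) + rng.normal(
        scale=noise_multiplier * clip_norm, size=w.shape
    )
    return w - lr * noisy_sum / len(X)

# Toy usage: a few private steps on synthetic data.
X = rng.normal(size=(64, 5))
y = (X @ np.ones(5) > 0).astype(float)
w = np.zeros(5)
for _ in range(10):
    w = dp_sgd_step(w, X, y)
```

The clipping bounds any single example's influence on the update, which is what lets the added noise translate into a formal (ε, δ) guarantee; the noise is also the source of the utility loss the abstract mentions.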

Tags: differential privacy, optimization, privacy protection, utility
