May 31, 2023, 1:10 a.m. | Krishna Pillutla, Galen Andrew, Peter Kairouz, H. Brendan McMahan, Alina Oprea, Sewoong Oh

cs.CR updates on arXiv.org arxiv.org

We present a rigorous methodology for auditing differentially private machine
learning algorithms by adding multiple carefully designed examples called
canaries. We take a first principles approach based on three key components.
First, we introduce Lifted Differential Privacy (LiDP) that expands the
definition of differential privacy to handle randomized datasets. This gives us
the freedom to design randomized canaries. Second, we audit LiDP by trying to
distinguish between the model trained with $K$ canaries versus $K - 1$ canaries
in the …
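To make the distinguishing test concrete, here is a minimal sketch of the general "K canaries vs. K - 1 canaries" auditing idea: score a held-out canary under models trained with and without it, then convert the resulting true/false positive rates of a threshold test into an empirical lower bound on epsilon. This is not the paper's actual algorithm; the score distributions are synthetic stand-ins for real training runs, and all names and parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each trial corresponds to a model trained with K
# canaries or with K - 1 canaries (the target canary held out), and we
# record a score for the target canary (e.g., its loss under the model).
# Real training runs are replaced here by synthetic score draws, so the
# resulting numbers are purely illustrative.
n_trials = 2000
scores_with_canary = rng.normal(loc=1.0, scale=1.0, size=n_trials)     # canary included
scores_without_canary = rng.normal(loc=0.0, scale=1.0, size=n_trials)  # canary held out

def empirical_eps_lower_bound(scores_in, scores_out, threshold, delta=0.0):
    """Estimate a lower bound on epsilon from a simple threshold test.

    Flagging "canary was included" when the score exceeds the threshold
    gives a true-positive rate (TPR) and false-positive rate (FPR). Any
    (eps, delta)-DP mechanism must satisfy TPR <= exp(eps) * FPR + delta,
    so eps >= ln((TPR - delta) / FPR).
    """
    tpr = np.mean(scores_in > threshold)
    fpr = np.mean(scores_out > threshold)
    if fpr == 0 or tpr <= delta:
        return 0.0
    return float(np.log((tpr - delta) / fpr))

# Sweep thresholds and keep the strongest lower bound. A real audit would
# also attach confidence intervals (e.g., Clopper-Pearson) to TPR and FPR
# rather than reporting the raw point estimate.
thresholds = np.linspace(-2.0, 3.0, 50)
best = max(empirical_eps_lower_bound(scores_with_canary,
                                     scores_without_canary, t)
           for t in thresholds)
print(f"Empirical lower bound on epsilon: {best:.2f}")
```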

