June 2, 2023, 1:10 a.m. | Matthew Jagielski

cs.CR updates on arXiv.org arxiv.org

Canary exposure, introduced in Carlini et al., is frequently used to
empirically evaluate, or audit, the privacy of machine learning model training.
The goal of this note is to provide some intuition on how to interpret canary
exposure, including by relating it to membership inference attacks and
differential privacy.

