Oct. 31, 2022, 1:20 a.m. | Paul Mangold, Michaël Perrot, Aurélien Bellet, Marc Tommasi

cs.CR updates on arXiv.org

In this work, we theoretically study the impact of differential privacy on fairness in binary classification. We prove that, given a class of models, popular group fairness measures are pointwise Lipschitz-continuous with respect to the parameters of the model. This result is a consequence of a more general statement on the probability that a decision function makes a negative prediction conditioned on an arbitrary event (such as membership in a sensitive group), which may be of independent interest. We use …
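
To make the Lipschitz statement concrete, here is one possible reading; the notation (a model h_θ, a fairness level F(θ), the demographic parity example, and the constant L(θ)) is assumed for exposition and is not taken verbatim from the paper. Taking F(θ) to be, say, the demographic parity gap of the classifier h_θ with respect to a sensitive attribute S,

\[
F(\theta) \;=\; \Pr\bigl[h_\theta(X) = 1 \mid S = 1\bigr] \;-\; \Pr\bigl[h_\theta(X) = 1 \mid S = 0\bigr],
\]

pointwise Lipschitz continuity in the parameters means that for every parameter vector \(\theta\) there exists a constant \(L(\theta) \ge 0\) such that

\[
\bigl|F(\theta) - F(\theta')\bigr| \;\le\; L(\theta)\,\lVert \theta - \theta' \rVert_2
\qquad \text{for all } \theta'.
\]

Because \(L(\theta)\) may depend on the base point \(\theta\), the guarantee is pointwise rather than uniform. The upshot, in line with the abstract's thesis, is that any training procedure (such as a differentially private one) that keeps the learned parameters close to their non-private counterparts also keeps the group fairness levels provably close.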

certificates, classification, fairness
