July 28, 2023, 1:10 a.m. | Avrim Blum, Princewill Okoroafor, Aadirupa Saha, Kevin Stangl

cs.CR updates on arXiv.org (arxiv.org)

We consider the vulnerability of fairness-constrained learning to small
amounts of malicious noise in the training data. Konstantinov and Lampert
(2021) initiated the study of this question and presented negative results
showing that there exist data distributions where, for several fairness
constraints, any proper learner will exhibit high vulnerability when group
sizes are imbalanced. Here, we present a more optimistic view, showing that
if we allow randomized classifiers, the landscape is much more nuanced. For
example, for Demographic Parity we …
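For readers unfamiliar with the term, Demographic Parity is the standard fairness constraint requiring a classifier's positive-prediction rate to be equal across protected groups. The following statement is background knowledge, not taken from the truncated abstract above: for a (possibly randomized) classifier h, features X, and protected attribute A, the constraint can be written as

\Pr[h(X) = 1 \mid A = a] \;=\; \Pr[h(X) = 1 \mid A = b] \quad \text{for all groups } a, b.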
