On the Vulnerability of Fairness Constrained Learning to Malicious Noise. (arXiv:2307.11892v2 [cs.LG] UPDATED)
cs.CR updates on arXiv.org
We consider the vulnerability of fairness-constrained learning to small amounts of malicious noise in the training data. Konstantinov and Lampert (2021) initiated the study of this question and presented negative results showing there exist data distributions where, for several fairness constraints, any proper learner will exhibit high vulnerability when group sizes are imbalanced. Here, we present a more optimistic view, showing that if we allow randomized classifiers, then the landscape is much more nuanced. For example, for Demographic Parity we …
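To make the abstract's terms concrete: Demographic Parity requires that the probability of a positive prediction be (near-)equal across demographic groups, and a randomized classifier can satisfy such a rate constraint by accepting each group with a chosen probability even when no deterministic rule would. The following is a minimal illustrative sketch of this idea (not code from the paper); the group labels, acceptance probabilities, and sample size are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: two groups, and a randomized classifier that outputs
# the positive label with a group-dependent acceptance probability p[a].
groups = rng.integers(0, 2, size=10_000)   # group membership a in {0, 1}
p = np.array([0.30, 0.30])                 # equal acceptance probability per group
yhat = rng.random(10_000) < p[groups]      # randomized positive/negative predictions

# Demographic Parity violation: gap between per-group positive-prediction rates.
rates = [yhat[groups == a].mean() for a in (0, 1)]
dp_gap = abs(rates[0] - rates[1])
print(f"DP gap: {dp_gap:.3f}")
```

With equal per-group acceptance probabilities the empirical gap is small (up to sampling noise), which is the kind of constraint satisfaction by randomization the abstract alludes to.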