Fairness-aware Differentially Private Collaborative Filtering. (arXiv:2303.09527v1 [cs.IR])
cs.CR updates on arXiv.org
Recently, there has been increasing adoption of differentially private algorithms
for privacy-preserving machine learning tasks. However, such algorithms come with
widely acknowledged trade-offs in algorithmic fairness. Specifically, we have
empirically observed that the classical collaborative filtering method, trained by
differentially private stochastic gradient descent (DP-SGD), has a disparate
impact on user groups with different user engagement levels. This, in turn,
causes the original unfair model to become …
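The mechanism behind the disparate impact is the DP-SGD update itself: per-example gradients are clipped to a fixed L2 norm and Gaussian noise is added, which tends to affect sparsely engaged users more than heavily engaged ones. Below is a minimal sketch of one DP-SGD step for a toy matrix-factorization collaborative filter; the hyperparameters (clip norm, noise multiplier, learning rate) and the function itself are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def dp_sgd_step(U, V, ratings, lr=0.05, clip=1.0, noise_mult=1.0, rng=None):
    """One illustrative DP-SGD step for matrix factorization.

    U: (num_users, k) user factors; V: (num_items, k) item factors.
    ratings: iterable of (user, item, value) observations.
    Per-example gradients are clipped to L2 norm `clip`, summed,
    and Gaussian noise with std noise_mult * clip is added before
    the averaged update (in the style of DP-SGD).
    """
    rng = rng or np.random.default_rng(0)
    gU = np.zeros_like(U)
    gV = np.zeros_like(V)
    for u, i, r in ratings:
        err = U[u] @ V[i] - r          # prediction error for this rating
        gu = err * V[i]                # gradient w.r.t. user factors
        gv = err * U[u]                # gradient w.r.t. item factors
        # Clip the joint per-example gradient to bound sensitivity.
        norm = np.sqrt(gu @ gu + gv @ gv)
        scale = min(1.0, clip / (norm + 1e-12))
        gU[u] += gu * scale
        gV[i] += gv * scale
    n = len(ratings)
    sigma = noise_mult * clip
    # Noise is added to every row, so users with few ratings get a
    # worse signal-to-noise ratio -- the source of the disparate impact.
    gU += rng.normal(0.0, sigma, size=gU.shape)
    gV += rng.normal(0.0, sigma, size=gV.shape)
    U -= lr * gU / n
    V -= lr * gV / n
    return U, V
```

Running many such steps on a dataset where some user groups contribute few ratings makes the fairness gap described in the abstract visible: the noise term dominates the clipped signal for low-engagement users.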