March 17, 2023, 1:10 a.m. | Zhenhuan Yang, Yingqiang Ge, Congzhe Su, Dingxian Wang, Xiaoting Zhao, Yiming Ying

cs.CR updates on arXiv.org arxiv.org

Recently, there has been increasing adoption of differentially private
algorithms for privacy-preserving machine learning tasks. However, it is
widely acknowledged that such algorithms come with trade-offs in algorithmic
fairness. Specifically, we empirically observe that the classical
collaborative filtering method, trained with differentially private
stochastic gradient descent (DP-SGD), has a disparate impact on user groups
with different engagement levels. This, in turn, causes the original unfair
model to become …
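The DP-SGD mechanism the abstract refers to works by clipping each example's gradient to a fixed norm and adding Gaussian noise to the averaged result; the per-example clipping is what tends to penalize under-represented groups (such as low-engagement users) whose gradients differ from the majority. A minimal sketch of one such aggregation step, with `clip_norm` and `noise_multiplier` as illustrative parameter names (not taken from the paper):

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One DP-SGD aggregation step (sketch): clip each per-example gradient
    to `clip_norm`, average, and add Gaussian noise scaled by the clip norm.
    This is a generic illustration of the technique, not the paper's code."""
    rng = np.random.default_rng(0) if rng is None else rng
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds clip_norm.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Gaussian noise calibrated to the clipping bound and batch size.
    noise = rng.normal(
        0.0,
        noise_multiplier * clip_norm / len(per_example_grads),
        size=mean_grad.shape,
    )
    return mean_grad + noise
```

With `noise_multiplier=0.0` the step reduces to plain averaging of clipped gradients, which makes the clipping bias visible in isolation: a large-norm gradient from an atypical user is shrunk before averaging, while small gradients pass through unchanged.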
