March 17, 2023, 1:10 a.m. | Zhenhuan Yang, Yingqiang Ge, Congzhe Su, Dingxian Wang, Xiaoting Zhao, Yiming Ying

cs.CR updates on arXiv.org

Recently, differentially private algorithms have seen increasing adoption for
privacy-preserving machine learning tasks. However, it is widely acknowledged
that such algorithms come with trade-offs in algorithmic fairness.
Specifically, we empirically observe that a classical collaborative filtering
method, trained with differentially private stochastic gradient descent
(DP-SGD), has a disparate impact on user groups with different levels of user
engagement. This, in turn, causes the original unfair model to become …
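The abstract does not include the paper's implementation, but the mechanism it refers to, DP-SGD applied to collaborative filtering, can be sketched as follows: per-example gradients are clipped to a fixed L2 norm and Gaussian noise calibrated to that clipping bound is added before the update. The matrix-factorization setup, function name, and hyperparameters below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def dp_sgd_step(U, V, batch, lr=0.05, clip=1.0, sigma=0.5, rng=None):
    """One DP-SGD step for matrix-factorization collaborative filtering.

    U: (n_users, d) user factors; V: (n_items, d) item factors.
    batch: iterable of (user, item, rating) triples.
    Each per-example gradient is clipped to L2 norm `clip`; Gaussian
    noise with std sigma * clip is added to the summed gradient.
    (Hypothetical sketch -- hyperparameters are illustrative.)
    """
    rng = np.random.default_rng() if rng is None else rng
    gU = np.zeros_like(U)
    gV = np.zeros_like(V)
    for u, i, r in batch:
        err = U[u] @ V[i] - r                    # prediction error
        gu, gi = err * V[i], err * U[u]          # per-example gradients
        norm = np.sqrt(gu @ gu + gi @ gi)
        scale = min(1.0, clip / (norm + 1e-12))  # clip joint gradient norm
        gU[u] += gu * scale
        gV[i] += gi * scale
    # Gaussian noise calibrated to the clipping bound (the privacy step)
    gU += rng.normal(0.0, sigma * clip, gU.shape)
    gV += rng.normal(0.0, sigma * clip, gV.shape)
    n = max(len(batch), 1)
    return U - lr * gU / n, V - lr * gV / n
```

The clipping bounds each example's influence on the update, which is what makes the added noise yield a differential-privacy guarantee; it is also the step that can suppress gradients from sparse, low-engagement users, which is one intuition for the disparate impact the abstract reports.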

