Differentially Private Post-Processing for Fair Regression
May 8, 2024, 4:11 a.m. | Ruicheng Xian, Qiaobo Li, Gautam Kamath, Han Zhao
cs.CR updates on arXiv.org (arxiv.org)
Abstract: This paper describes a differentially private post-processing algorithm for learning fair regressors that satisfy statistical parity. It addresses both the privacy concerns raised by machine learning models trained on sensitive data and the fairness concern that such models may propagate historical biases. The algorithm can be applied to post-process any given regressor, improving its fairness by remapping its outputs. It consists of three steps: first, the output distributions are estimated privately via histogram density estimation and the Laplace mechanism, …
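The first step named in the abstract — privately estimating an output distribution with histogram density estimation and the Laplace mechanism — can be sketched as follows. This is a minimal illustration of the standard Laplace-noised histogram, not the paper's exact procedure; the function name and parameters are illustrative, and it assumes add/remove-one neighboring datasets, so each bin count has sensitivity 1.

```python
import numpy as np

def private_histogram_density(values, bins, epsilon, seed=None):
    """Differentially private histogram density estimate (illustrative sketch):
    add Laplace(1/epsilon) noise to each bin count (sensitivity 1 under
    add/remove-one neighbors), clip negatives, and normalize to a density."""
    rng = np.random.default_rng(seed)
    counts, edges = np.histogram(values, bins=bins)
    noisy = counts + rng.laplace(scale=1.0 / epsilon, size=counts.shape)
    noisy = np.clip(noisy, 0.0, None)  # probabilities cannot be negative
    total = noisy.sum()
    if total == 0:  # degenerate case after clipping: fall back to uniform
        return np.full(len(noisy), 1.0 / len(noisy)), edges
    return noisy / total, edges

# Hypothetical regressor outputs on one group's data
outputs = np.random.default_rng(0).normal(size=1000)
density, edges = private_histogram_density(outputs, bins=20, epsilon=1.0, seed=1)
```

The returned `density` is a valid probability vector over the 20 bins; in the paper's setting, one such estimate per demographic group would feed the later remapping steps.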