Feb. 8, 2024, 5:10 a.m. | Sanjari Srivastava, Piotr Mardziel, Zhikun Zhang, Archana Ahlawat, Anupam Datta, John C. Mitchell

cs.CR updates on arXiv.org

Fairness and privacy are two important values that machine learning (ML) practitioners often seek to operationalize in models. Fairness aims to reduce model bias against social/demographic sub-groups; privacy, via differential privacy (DP) mechanisms, limits the influence of any individual's training data on the resulting model. The trade-offs between these two goals of trustworthy ML pose a challenge to those wishing to address both. We show that DP amplifies gender, racial, and religious bias when fine-tuning large …
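For readers unfamiliar with the mechanism at issue: a randomized algorithm M is (ε, δ)-differentially private if, for any two datasets D and D′ differing in one record and any output set S, Pr[M(D) ∈ S] ≤ e^ε · Pr[M(D′) ∈ S] + δ. In model fine-tuning this guarantee is typically obtained with DP-SGD (Abadi et al., 2016), which clips each example's gradient and adds Gaussian noise. Below is a minimal NumPy sketch of one DP-SGD step; the function and parameter names are illustrative, not taken from the paper.

import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm=1.0,
                noise_multiplier=1.0, lr=0.1, rng=None):
    # One DP-SGD update: clip each example's gradient to clip_norm,
    # average, then add Gaussian noise whose scale is tied to the
    # clipping norm (the per-example sensitivity bound).
    rng = rng if rng is not None else np.random.default_rng()
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(
        0.0, noise_multiplier * clip_norm / len(per_example_grads),
        size=avg.shape)
    return params - lr * (avg + noise)

# Illustrative usage with random stand-in gradients:
params = np.zeros(4)
grads = [np.random.randn(4) for _ in range(32)]  # per-example gradients
params = dp_sgd_step(params, grads, clip_norm=1.0, noise_multiplier=1.1)

Clipping and noise bound each individual's influence on the model, which is exactly the property the abstract describes; prior work has observed that these same operations can disproportionately degrade accuracy on under-represented groups, and that tension is what the paper examines.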
