Feb. 8, 2024, 5:10 a.m. | Sanjari Srivastava, Piotr Mardziel, Zhikun Zhang, Archana Ahlawat, Anupam Datta, John C. Mitchell

cs.CR updates on arXiv.org

Fairness and privacy are two important values that machine learning (ML) practitioners often seek to operationalize in models. Fairness aims to reduce model bias against social/demographic sub-groups. Privacy, via differential privacy (DP) mechanisms, on the other hand, limits the impact of any individual's training data on the resulting model. The trade-offs between the privacy and fairness goals of trustworthy ML pose a challenge to those wishing to address both. We show that DP amplifies gender, racial, and religious bias when fine-tuning large …
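For context, the standard DP training mechanism in this setting is DP-SGD: each example's gradient is clipped to a fixed norm, bounding any one individual's influence on the update, and Gaussian noise scaled to that bound is added before the step. The clipping step is also a commonly cited source of disparate impact, since examples from underrepresented groups tend to produce larger gradients that get clipped more aggressively. Below is a minimal sketch of one DP-SGD step, assuming a generic PyTorch model; the function name `dp_sgd_step` and the hyperparameters (`clip_norm`, `noise_mult`, `lr`) are illustrative assumptions, not the paper's actual fine-tuning configuration.

```python
# Minimal DP-SGD sketch: per-example gradient clipping + Gaussian noise.
# Illustrative only; names and hyperparameters are assumptions, not the
# paper's setup. In practice, libraries such as Opacus vectorize this.
import torch

def dp_sgd_step(model, loss_fn, xs, ys, lr=0.1, clip_norm=1.0, noise_mult=1.0):
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]
    for x, y in zip(xs, ys):
        # Per-example gradient (batch of size 1).
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        # Clip the example's total gradient norm to clip_norm, bounding
        # the sensitivity of the update to any single individual.
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (clip_norm / (norm + 1e-12)).clamp(max=1.0)
        for s, g in zip(summed, grads):
            s.add_(g * scale)
    with torch.no_grad():
        for p, s in zip(params, summed):
            # Gaussian noise calibrated to the clipping bound yields the
            # (epsilon, delta)-DP guarantee over the training run.
            noise = torch.randn_like(s) * noise_mult * clip_norm
            p.add_((s + noise) / len(xs), alpha=-lr)
```

Both knobs that make training private (the clip norm and the noise multiplier) act uniformly across examples, which is why their fairness side effects during fine-tuning are non-obvious and worth the measurement this paper undertakes.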

