April 13, 2022, 1:20 a.m. | Changhun Jo, Jy-yong Sohn, Kangwook Lee

cs.CR updates on arXiv.org arxiv.org

Minimizing risk with fairness constraints is one of the popular approaches to
learning a fair classifier. Recent works showed that this approach yields an
unfair classifier if the training set is corrupted. In this work, we study the
minimum amount of data corruption required for a successful flipping attack.
First, we find lower/upper bounds on this quantity and show that these bounds
are tight when the target model is the unique unconstrained risk minimizer.
Second, we propose a computationally efficient …
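
To make the setting concrete, here is a toy sketch (not the paper's attack or bounds): a logistic classifier trained with a demographic-parity penalty, compared before and after a naive label-flipping corruption of the training set. The penalty weight `lam`, the synthetic group structure, and the flipping heuristic are all illustrative assumptions.

```python
# Toy illustration (not the paper's algorithm): fairness-penalized logistic
# regression trained by gradient descent, plus a naive label-flipping
# corruption of the training set.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n=2000):
    g = rng.integers(0, 2, n)                 # sensitive group (0 or 1)
    x = rng.normal(size=(n, 2)) + g[:, None]  # group-dependent features
    y = (x[:, 0] + 0.5 * rng.normal(size=n) > 0.5).astype(float)
    return x, y, g

def train(x, y, g, lam=2.0, lr=0.1, steps=500):
    """Minimize logistic loss + lam * |soft demographic-parity gap|."""
    xb = np.hstack([x, np.ones((len(x), 1))])
    w = np.zeros(xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-xb @ w))
        grad_loss = xb.T @ (p - y) / len(y)
        # soft demographic-parity gap: mean score on group 1 minus group 0
        gap = p[g == 1].mean() - p[g == 0].mean()
        dp = p * (1 - p)
        grad_gap = (xb[g == 1] * dp[g == 1, None]).mean(0) - \
                   (xb[g == 0] * dp[g == 0, None]).mean(0)
        w -= lr * (grad_loss + lam * np.sign(gap) * grad_gap)
    return w

def dp_gap(w, x, g):
    """Demographic-parity gap of the hard classifier sign(x @ w)."""
    xb = np.hstack([x, np.ones((len(x), 1))])
    yhat = (xb @ w > 0).astype(float)
    return yhat[g == 1].mean() - yhat[g == 0].mean()

x, y, g = make_data()
w_clean = train(x, y, g)

# Naive corruption: flip the labels of a few group-0 positives so the
# fairness term pulls the constrained solution in the attacker's direction.
y_bad = y.copy()
idx = np.where((g == 0) & (y == 1))[0][:50]
y_bad[idx] = 0
w_poisoned = train(x, y_bad, g)

print("DP gap, clean training set:    ", round(dp_gap(w_clean, x, g), 3))
print("DP gap, corrupted training set:", round(dp_gap(w_poisoned, x, g), 3))
```

The paper's question is the minimum number of such corrupted points needed for the attack to succeed; this sketch only demonstrates that flipping a small, targeted subset of labels can move the fairness-constrained solution.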

attacks, binary classification, fair, lg
