Oct. 19, 2022, 2:20 a.m. | Han Xu, Xiaorui Liu, Yuxuan Wan, Jiliang Tang

cs.CR updates on arXiv.org

Fair classification aims to constrain classification models so that they achieve
equality (of treatment or prediction quality) across different sensitive groups.
However, fair classification can be at risk from poisoning attacks, which
deliberately insert malicious training samples to manipulate the trained
classifiers' performance. In this work, we study the poisoning scenario in which
the attacker can insert a small fraction of samples into the training data, with
arbitrary sensitive attributes as well as other predictive features. We
demonstrate that the fairly trained …
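The attack scenario in the abstract can be sketched with a toy example. Everything below is an illustrative assumption, not the paper's method: the synthetic data, the plain (unconstrained) logistic-regression trainer standing in for a fair classifier, and the particular poison crafting (all poisoned points claim one sensitive attribute, with flipped labels) are all hypothetical choices made just to show the shape of the threat model.

```python
import numpy as np

def dp_gap(y_hat, a):
    """Demographic-parity gap: |P(yhat=1 | a=0) - P(yhat=1 | a=1)|."""
    return abs(y_hat[a == 0].mean() - y_hat[a == 1].mean())

def train_logreg(X, y, lr=0.1, steps=500):
    """Plain logistic regression via batch gradient descent
    (a stand-in for a fairness-constrained trainer)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))      # sigmoid predictions
        w -= lr * Xb.T @ (p - y) / len(y)      # logistic-loss gradient step
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (Xb @ w > 0).astype(int)

def poisoning_demo(n=2000, eps=0.05, seed=0):
    """Train on clean data, then on data with an eps fraction of
    attacker-inserted points, and compare fairness gaps."""
    rng = np.random.default_rng(seed)
    # Synthetic clean data: sensitive attribute a, two features, label y.
    a = rng.integers(0, 2, n)
    X = rng.normal(0.0, 1.0, (n, 2)) + 0.5 * a[:, None]
    y = (X.sum(axis=1) + rng.normal(0, 0.5, n) > 0.5).astype(int)

    w = train_logreg(X, y)
    gap_clean = dp_gap(predict(w, X), a)

    # Attacker inserts m = eps * n poisoned samples, all claiming a = 1,
    # with strongly positive features but flipped (negative) labels --
    # a crude attempt to push the model to reject group a = 1.
    m = int(eps * n)
    X_p = np.full((m, 2), 3.0)
    y_p = np.zeros(m, dtype=int)

    w_poisoned = train_logreg(np.vstack([X, X_p]), np.concatenate([y, y_p]))
    gap_poisoned = dp_gap(predict(w_poisoned, X), a)  # evaluate on clean data
    return gap_clean, gap_poisoned, m
```

Note that because the attacker controls the sensitive attributes of the inserted points as well as their features and labels, the same scaffold can express attacks that target the fairness gap, plain accuracy, or both.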
