Improving Robust Fairness via Balance Adversarial Training. (arXiv:2209.07534v1 [cs.LG])
Sept. 19, 2022, 1:20 a.m. | Chunyu Sun, Chenye Xu, Chengyuan Yao, Siyuan Liang, Yichao Wu, Ding Liang, XiangLong Liu, Aishan Liu
cs.CR updates on arXiv.org arxiv.org
Adversarial training (AT) methods are effective against adversarial attacks,
yet they introduce severe disparity of accuracy and robustness between
different classes, known as the robust fairness problem. Previously proposed
Fair Robust Learning (FRL) adaptively reweights different classes to improve
fairness. However, the performance of the better-performing classes decreases,
leading to a sharp overall performance drop. In this paper, we observe two unfair
phenomena during adversarial training: different difficulties in generating
adversarial examples from each class (source-class fairness) and disparate
target class …
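The core idea behind FRL-style reweighting, as the abstract describes it, is to upweight the loss of classes with worse robust accuracy during adversarial training. A minimal NumPy sketch of that idea (a toy logistic model, FGSM-style perturbations, and a simplified class-reweighting rule — all names and update rules here are illustrative assumptions, not the paper's actual method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable 2-class data (illustrative, not from the paper)
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(int)

def loss_grad(w, X, y):
    # Per-example gradient of the logistic loss w.r.t. the weights
    p = 1 / (1 + np.exp(-(X @ w)))
    return (p - y)[:, None] * X            # shape (n, d)

def fgsm(w, X, y, eps=0.1):
    # FGSM: perturb inputs along the sign of the input-gradient.
    # For a logistic model, d(loss)/dx = (p - y) * w.
    p = 1 / (1 + np.exp(-(X @ w)))
    g = (p - y)[:, None] * w[None, :]
    return X + eps * np.sign(g)

# Adversarial training with simplified per-class reweighting
w = np.zeros(d)
class_w = np.ones(2)                       # one weight per class
lr = 0.1
for epoch in range(50):
    X_adv = fgsm(w, X, y)                  # generate adversarial examples
    g = loss_grad(w, X_adv, y)
    sample_w = class_w[y]                  # broadcast class weight to samples
    w -= lr * (sample_w[:, None] * g).mean(axis=0)

    # Class-wise robust error on the adversarial batch
    pred = (X_adv @ w > 0).astype(int)
    err = np.array([np.mean(pred[y == c] != c) for c in (0, 1)])

    # Upweight the worse-performing class (a crude stand-in for FRL's
    # adaptive reweighting); weights are normalized to sum to 2.
    class_w = np.exp(err - err.mean())
    class_w = 2 * class_w / class_w.sum()
```

The trade-off the abstract points to is visible in this setup: raising `class_w` for the weaker class shifts gradient mass away from the stronger class, which is why FRL can degrade the better-performing classes.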
Jobs in InfoSec / Cybersecurity
SOC 2 Manager, Audit and Certification
@ Deloitte | US and CA Multiple Locations
Network AWS Cloud & Firewall Engineer
@ Arthur Grand Technologies Inc | Plano, TX, United States
Lead Consultant, Data Centre & BCP
@ Singtel | Singapore, Singapore
Protocol Security Engineer
@ Osmosis Labs | Remote
Technical Engineer - Payments Security Specialist
@ H&M Group | Bengaluru, India
Intern, Security Architecture
@ Sony | Work from Home-CA