Sept. 19, 2022, 1:20 a.m. | Chunyu Sun, Chenye Xu, Chengyuan Yao, Siyuan Liang, Yichao Wu, Ding Liang, Xianglong Liu, Aishan Liu

cs.CR updates on arXiv.org

Adversarial training (AT) methods are effective against adversarial attacks,
yet they introduce a severe disparity in accuracy and robustness between
different classes, known as the robust fairness problem. The previously
proposed Fair Robust Learning (FRL) approach adaptively reweights different
classes to improve fairness. However, the performance of the
better-performing classes decreases, leading to a strong drop in overall
performance. In this paper, we observe two unfair phenomena during
adversarial training: different difficulties in generating adversarial
examples from each class (source-class fairness) and disparate
target class …
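
The class-reweighting idea behind FRL can be illustrated with a short sketch. Below is a minimal PyTorch example of PGD-based adversarial training where each sample's loss is scaled by a per-class weight; the pgd_attack routine, the reweighted_at_step helper, and the fixed class_weights tensor are illustrative assumptions for this sketch, not the paper's exact formulation.

# Minimal sketch: class-reweighted PGD adversarial training.
# Assumes inputs are scaled to [0, 1]; hyperparameters are conventional
# CIFAR-style values, not taken from the paper.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    # L-infinity PGD: random start, then iterated signed-gradient steps
    # projected back into the eps-ball around x.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def reweighted_at_step(model, optimizer, x, y, class_weights):
    # One AT step: craft adversarial examples, then weight each sample's
    # loss by its class weight (higher weight = harder / less fair class).
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()
    per_sample = F.cross_entropy(model(x_adv), y, reduction="none")
    loss = (class_weights[y] * per_sample).mean()
    loss.backward()
    optimizer.step()
    return loss.item()

In an FRL-style scheme, class_weights would be adapted during training, increasing for classes whose robust error exceeds a fairness threshold; here it is simply a fixed tensor of length num_classes.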

Tags: adversarial, balance, fairness, training
