Nov. 9, 2022, 2:20 a.m. | Yulu Jin, Lifeng Lai

cs.CR updates on arXiv.org (arxiv.org)

In this paper, we take a first step towards answering the question of how to
design fair machine learning algorithms that are robust to adversarial attacks.
Using a minimax framework, we aim to design an adversarially robust fair
regression model that achieves optimal performance in the presence of an
attacker who can add a carefully designed adversarial data point to the
dataset or perform a rank-one attack on it. By solving the proposed
nonsmooth nonconvex-nonconcave minimax problem, …
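
The abstract only sketches the minimax formulation, so the following is a minimal, hypothetical illustration of the kind of setup it describes: a defender fits a fairness-regularized linear regression while an attacker applies a rank-one perturbation to the feature matrix. The loss, the group-fairness penalty, the heuristic attack direction, and the alternating scheme below are all assumptions for illustration, not the paper's actual algorithm.

```python
# Minimal sketch (NOT the paper's formulation): defender minimizes a
# fairness-regularized regression loss; attacker adds a rank-one
# perturbation u v^T to the feature matrix. All choices here are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)
group = rng.integers(0, 2, size=n)  # binary protected attribute (assumed)

def defender_loss(w, X, y, lam=1.0):
    """Squared error plus a simple group-fairness penalty:
    the squared gap between the two groups' mean residuals."""
    resid = X @ w - y
    gap = resid[group == 0].mean() - resid[group == 1].mean()
    return np.mean(resid**2) + lam * gap**2

def rank_one_attack(X, w, eps=0.5):
    """Attacker's move: a rank-one perturbation u v^T chosen by a crude
    heuristic that pushes the current model's residuals upward."""
    resid = X @ w - y
    u = resid / (np.linalg.norm(resid) + 1e-12)  # direction over samples
    v = w / (np.linalg.norm(w) + 1e-12)          # direction over features
    return X + eps * np.outer(u, v)

# Crude alternating scheme standing in for solving the minimax problem:
# the attacker perturbs the data, then the defender refits by least squares.
w = np.zeros(d)
for _ in range(20):
    X_adv = rank_one_attack(X, w)
    w, *_ = np.linalg.lstsq(X_adv, y, rcond=None)

print("defender loss on attacked data:", defender_loss(w, X_adv, y))
```

In this toy version the inner maximization is replaced by a single heuristic attack step; the paper instead solves a nonsmooth nonconvex-nonconcave minimax problem, for which such naive alternation generally has no guarantees.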

