April 5, 2024, 4:11 a.m. | Vasisht Duddu, Sebastian Szyller, N. Asokan

cs.CR updates on arXiv.org

arXiv:2312.04542v2 Announce Type: replace
Abstract: Machine learning (ML) models cannot neglect risks to security, privacy, and fairness. Several defenses have been proposed to mitigate such risks. When a defense is effective in mitigating one risk, it may correspond to increased or decreased susceptibility to other risks. Existing research lacks an effective framework to recognize and explain these unintended interactions. We present such a framework, based on the conjecture that overfitting and memorization underlie unintended interactions. We survey existing literature on …
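The abstract's central conjecture is that overfitting and memorization underlie unintended interactions between defenses and risks. A well-known instance of this link is that the more a model overfits, the easier membership inference becomes. The following is a minimal sketch, not taken from the paper, that illustrates this with scikit-learn: it compares an overfit and a regularized model on a synthetic task using a simple confidence-threshold membership inference attack. The dataset, models, and fixed threshold are illustrative assumptions.

# Minimal sketch (illustrative, not the authors' framework): overfitting vs.
# membership-inference advantage, measured with a confidence-threshold attack.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5,
                                                    random_state=0)

def membership_advantage(model):
    # Guess "member" when the model's confidence in the true label exceeds a
    # threshold; advantage = TPR on members (train) minus FPR on non-members (test).
    conf_train = model.predict_proba(X_train)[np.arange(len(y_train)), y_train]
    conf_test = model.predict_proba(X_test)[np.arange(len(y_test)), y_test]
    threshold = 0.9  # illustrative fixed threshold
    return np.mean(conf_train >= threshold) - np.mean(conf_test >= threshold)

for name, model in [
    ("overfit (unlimited depth)", RandomForestClassifier(max_depth=None, random_state=0)),
    ("regularized (depth 3)", RandomForestClassifier(max_depth=3, random_state=0)),
]:
    model.fit(X_train, y_train)
    gap = model.score(X_train, y_train) - model.score(X_test, y_test)
    print(f"{name}: generalization gap={gap:.3f}, "
          f"membership advantage={membership_advantage(model):.3f}")

In this toy setup the unregularized model shows both a larger train-test gap and a larger attack advantage, which is the kind of coupling between a risk (membership inference) and a mitigating factor (regularization) that the paper's framework aims to systematize.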

