April 5, 2024, 4:11 a.m. | Vasisht Duddu, Sebastian Szyller, N. Asokan

cs.CR updates on arXiv.org (arxiv.org)

arXiv:2312.04542v2 Announce Type: replace
Abstract: Machine learning (ML) models cannot neglect risks to security, privacy, and fairness. Several defenses have been proposed to mitigate such risks. When a defense is effective in mitigating one risk, it may correspond to increased or decreased susceptibility to other risks. Existing research lacks an effective framework to recognize and explain these unintended interactions. We present such a framework, based on the conjecture that overfitting and memorization underlie unintended interactions. We survey existing literature on …
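The conjecture that overfitting and memorization underlie these unintended interactions can be illustrated with a small experiment. The sketch below is not the paper's framework; it is a hypothetical illustration using scikit-learn, where L2 regularization strength stands in for "a defense that reduces overfitting", the train-test accuracy gap serves as an overfitting proxy, and a simple loss-threshold membership-inference score serves as a privacy-risk proxy. The expectation under the conjecture is that the more regularized model shows both a smaller gap and a lower membership-inference advantage.

# Minimal, hypothetical sketch: how reducing overfitting (via regularization)
# can also reduce a privacy risk proxy (loss-threshold membership inference).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic classification task; half the data is "members" (training set).
X, y = make_classification(n_samples=2000, n_features=40, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

def overfit_and_mi_proxy(C):
    """Return (train-test accuracy gap, membership-inference advantage)
    for inverse regularization strength C (smaller C = stronger defense)."""
    clf = LogisticRegression(C=C, max_iter=1000).fit(X_tr, y_tr)
    gap = (accuracy_score(y_tr, clf.predict(X_tr))
           - accuracy_score(y_te, clf.predict(X_te)))
    # Per-example cross-entropy losses for members (train) and non-members (test).
    p_tr = clf.predict_proba(X_tr)
    p_te = clf.predict_proba(X_te)
    loss_tr = -np.log(p_tr[np.arange(len(y_tr)), y_tr] + 1e-12)
    loss_te = -np.log(p_te[np.arange(len(y_te)), y_te] + 1e-12)
    # Threshold attack: guess "member" if the loss is below the pooled median.
    thr = np.median(np.concatenate([loss_tr, loss_te]))
    tpr = np.mean(loss_tr < thr)   # members correctly flagged
    fpr = np.mean(loss_te < thr)   # non-members wrongly flagged
    return gap, tpr - fpr          # advantage = TPR - FPR

for name, C in [("weakly regularized", 100.0), ("strongly regularized", 0.01)]:
    gap, adv = overfit_and_mi_proxy(C)
    print(f"{name}: train-test gap = {gap:.3f}, MI advantage = {adv:.3f}")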

Tags: arxiv, cs.CR, cs.LG, defenses, fairness, machine learning, privacy, risks, security
