March 5, 2024, 3:12 p.m. | Jan Aalmoes, Vasisht Duddu, Antoine Boutet

cs.CR updates on arXiv.org

arXiv:2211.10209v2 Announce Type: replace-cross
Abstract: Machine learning (ML) models have been deployed in high-stakes applications, e.g., healthcare and criminal justice. Prior work has shown that ML models are vulnerable to attribute inference attacks, in which an adversary with some background knowledge trains an ML attack model to infer sensitive attributes by exploiting distinguishable model predictions. However, some prior attribute inference attacks make strong assumptions about the adversary's background knowledge (e.g., the marginal distribution of the sensitive attribute) and pose no more privacy risk than …
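To make the attack setting concrete, here is a minimal sketch of an attribute inference attack against a black-box model, assuming a scikit-learn-style setup. All names (target_model, aux_X, aux_s) and the synthetic data are hypothetical illustrations, not the paper's actual construction: the adversary holds an auxiliary dataset labeled with the sensitive attribute, queries the target model for prediction vectors, and trains an attack model to map those predictions back to the sensitive attribute.

```python
# Illustrative attribute inference attack (hypothetical setup, not the
# paper's method). The adversary never sees the sensitive attribute of
# its victims, only the target model's prediction vectors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for the victim's training data: the sensitive
# attribute correlates with one of the non-sensitive features.
X = rng.normal(size=(2000, 8))
sensitive = (X[:, 0] + rng.normal(scale=0.5, size=2000) > 0).astype(int)
y = (X[:, :3].sum(axis=1) + sensitive > 0.5).astype(int)  # task label

# Target model: trained on non-sensitive features only.
target_model = RandomForestClassifier(random_state=0).fit(X, y)

# Adversary's background knowledge: an auxiliary dataset with known
# sensitive attributes, plus black-box query access to the target.
aux_X, vic_X, aux_s, vic_s = train_test_split(
    X, sensitive, test_size=0.5, random_state=0)
aux_preds = target_model.predict_proba(aux_X)  # distinguishable predictions
vic_preds = target_model.predict_proba(vic_X)

# Attack model: infer the sensitive attribute from predictions alone.
attack_model = LogisticRegression().fit(aux_preds, aux_s)
print("attack accuracy:", attack_model.score(vic_preds, vic_s))
```

Because the sensitive attribute leaks into the target model's predictions through correlated features, the attack model can recover it at better-than-chance accuracy even without direct access to the victims' records, which is the leakage channel the abstract refers to.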

