Purifier: Defending Data Inference Attacks via Transforming Confidence Scores. (arXiv:2212.00612v1 [cs.LG])
Dec. 2, 2022, 2:10 a.m. | Ziqi Yang, Lijin Wang, Da Yang, Jie Wan, Ziming Zhao, Ee-Chien Chang, Fan Zhang, Kui Ren
cs.CR updates on arXiv.org
Neural networks are susceptible to data inference attacks such as the
membership inference attack, the adversarial model inversion attack, and the
attribute inference attack, in which the attacker can infer useful information
about a data sample (its membership in the training set, a reconstruction of
it, or its sensitive attributes) from the confidence scores predicted by the
target classifier. In this paper, we propose a method, PURIFIER, to defend
against membership inference attacks. It transforms the confidence score
vectors predicted by the target classifier …
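To make the attack surface concrete: membership inference works because classifiers tend to be more confident on training members than on unseen samples, so the shape of the confidence vector leaks membership. The snippet below is a minimal illustrative sketch of the general idea of transforming confidence scores, not the paper's PURIFIER (which, per the abstract, learns the transformation); here a simple temperature-smoothing stand-in flattens the vector while preserving the predicted label.

```python
import math

def smooth_confidences(scores, temperature=2.0):
    """Flatten a softmax confidence vector while keeping the argmax unchanged.

    scores: list of probabilities summing to 1.
    temperature > 1 pushes the distribution toward uniform, shrinking the
    overconfidence signal that membership inference attacks exploit.
    NOTE: this is a hypothetical stand-in defense for illustration, not the
    learned transformation described in the PURIFIER paper.
    """
    logits = [math.log(max(s, 1e-12)) / temperature for s in scores]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]   # subtract max for stability
    z = sum(exps)
    return [e / z for e in exps]

# A sharply peaked vector, typical of a training member:
overconfident = [0.97, 0.02, 0.01]
purified = smooth_confidences(overconfident)
# The predicted class (index 0) is preserved, but the confidence gap shrinks.
assert purified.index(max(purified)) == 0
assert max(purified) < max(overconfident)
```

The design constraint a real defense must also satisfy, and which PURIFIER targets, is preserving classification utility while removing the membership signal; simple smoothing like this trades some calibration for that.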