all InfoSec news
Quantifying Privacy Risks of Masked Language Models Using Membership Inference Attacks. (arXiv:2203.03929v2 [cs.LG] UPDATED)
Nov. 7, 2022, 2:20 a.m. | Fatemehsadat Mireshghallah, Kartik Goyal, Archit Uniyal, Taylor Berg-Kirkpatrick, Reza Shokri
cs.CR updates on arXiv.org
The wide adoption and application of masked language models (MLMs) on sensitive data (from legal to medical) necessitates a thorough quantitative investigation into their privacy vulnerabilities: to what extent do MLMs leak information about their training data? Prior attempts at measuring leakage of MLMs via membership inference attacks have been inconclusive, implying the potential robustness of MLMs to privacy attacks. In this work, we posit that prior attempts were inconclusive because they based their attack solely on the MLM's model …
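As background for readers unfamiliar with membership inference, below is a minimal sketch of the simplest form of such an attack: thresholding a model's per-example loss, on synthetic data. The loss distributions, threshold value, and `attack` function are illustrative assumptions for exposition only, not the attack studied in the paper.

```python
# Toy membership inference attack (MIA) sketch on synthetic loss scores.
# Intuition: models tend to assign lower loss to examples they were
# trained on, so an attacker can guess "member" when the loss is low.
# All numbers here are made up for illustration.
import random

random.seed(0)

# Synthetic per-example losses: members (training data) skew lower.
member_losses = [random.gauss(1.0, 0.5) for _ in range(1000)]
nonmember_losses = [random.gauss(2.0, 0.5) for _ in range(1000)]

def attack(loss, threshold=1.5):
    """Predict membership: True if the model's loss on the example is low."""
    return loss < threshold

# Attack accuracy over the synthetic population (0.5 = random guessing).
correct = sum(attack(l) for l in member_losses) + \
          sum(not attack(l) for l in nonmember_losses)
accuracy = correct / (len(member_losses) + len(nonmember_losses))
print(f"attack accuracy: {accuracy:.2f}")
```

With well-separated loss distributions like these, the threshold attack succeeds far more often than chance, which is exactly the kind of leakage the paper seeks to quantify for MLMs.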