Nov. 7, 2022, 2:20 a.m. | Fatemehsadat Mireshghallah, Kartik Goyal, Archit Uniyal, Taylor Berg-Kirkpatrick, Reza Shokri

cs.CR updates on arXiv.org (arxiv.org)

The wide adoption and application of masked language models (MLMs) on
sensitive data (from legal to medical) necessitates a thorough quantitative
investigation into their privacy vulnerabilities -- to what extent do MLMs leak
information about their training data? Prior attempts at measuring leakage of
MLMs via membership inference attacks have been inconclusive, implying the
potential robustness of MLMs to privacy attacks. In this work, we posit that
prior attempts were inconclusive because they based their attack solely on the
MLM's model …
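For context, a minimal sketch of the kind of score-only membership inference test the abstract refers to: threshold a BERT-style MLM's pseudo-log-likelihood of a candidate sequence and flag high-scoring samples as likely training members. This is an illustrative baseline, not the paper's proposed attack; the model name, threshold value, and helper names below are assumptions.

```python
# Hedged sketch of a naive, score-only membership inference attack on an MLM.
# The checkpoint name and the threshold are illustrative assumptions.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_name = "bert-base-uncased"  # assumption: any HuggingFace MLM checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
model.eval()

def pseudo_log_likelihood(text: str) -> float:
    """Average log-probability of each token when it is masked out in turn."""
    ids = tokenizer(text, return_tensors="pt")["input_ids"][0]
    total, count = 0.0, 0
    for i in range(1, len(ids) - 1):  # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
        count += 1
    return total / max(count, 1)

def is_member(text: str, threshold: float = -2.0) -> bool:
    """Guess 'training member' if the model scores the text above a threshold.

    The abstract argues that attacks based solely on such model scores have
    been inconclusive, which is what motivates the paper's stronger attack.
    """
    return pseudo_log_likelihood(text) > threshold
```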

Tags: attacks, language, privacy, risks
