Membership Inference Attacks and Privacy in Topic Modeling
March 8, 2024, 5:11 a.m. | Nico Manzonelli, Wanrong Zhang, Salil Vadhan
cs.CR updates on arXiv.org
Abstract: Recent research shows that large language models are susceptible to privacy attacks that infer aspects of the training data. However, it is unclear whether simpler generative models, like topic models, share similar vulnerabilities. In this work, we propose an attack against topic models that can confidently identify members of the training data in Latent Dirichlet Allocation. Our results suggest that the privacy risks associated with generative modeling are not restricted to large neural models. Additionally, …
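The paper's own attack is not reproduced here, but the general flavor of a membership inference attack on LDA can be illustrated with a simple likelihood-threshold baseline: score each document under the fitted topic model and flag high-scoring documents as likely training members. Below is a minimal sketch assuming scikit-learn's LatentDirichletAllocation and a hypothetical member/non-member split; the per-word normalization and the median threshold are illustrative choices, not the authors' method.

```python
# Sketch of a likelihood-threshold membership inference attack on LDA.
# Illustrates the general idea only; the paper's actual attack may differ.
import numpy as np
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical split: "members" were used to train the target model,
# "non-members" were not.
docs = fetch_20newsgroups(remove=("headers", "footers", "quotes")).data[:2000]
members, non_members = docs[:1000], docs[1000:]

vec = CountVectorizer(max_features=5000, stop_words="english")
X_mem = vec.fit_transform(members)
X_non = vec.transform(non_members)

# Train the target topic model on the member documents only.
lda = LatentDirichletAllocation(n_components=20, random_state=0).fit(X_mem)

def per_word_scores(X):
    # Approximate per-word log-likelihood of each document under the
    # fitted model (normalizing by length so long docs don't dominate).
    return np.array(
        [lda.score(X[i]) / max(X[i].sum(), 1) for i in range(X.shape[0])]
    )

s_mem, s_non = per_word_scores(X_mem), per_word_scores(X_non)

# Attack: predict "member" when the score exceeds a threshold chosen
# from the non-member score distribution (here, its median).
tau = np.median(s_non)
tpr = (s_mem > tau).mean()  # fraction of true members flagged
fpr = (s_non > tau).mean()  # fraction of non-members wrongly flagged
print(f"TPR={tpr:.2f}, FPR={fpr:.2f}")
```

If members score noticeably above the threshold more often than non-members, the model leaks membership information; the paper's result is that such leakage is measurable even for LDA, not just for large neural models.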