Feb. 28, 2024, 5:11 a.m. | Jeffrey G. Wang, Jason Wang, Marvin Li, Seth Neel

cs.CR updates on arXiv.org (arxiv.org)

arXiv:2402.17012v1 Announce Type: new
Abstract: In this paper, we undertake a systematic study of privacy attacks against open source Large Language Models (LLMs), where an adversary has access to either the model weights, gradients, or losses, and tries to exploit them to learn something about the underlying training data. Our headline results are the first membership inference attacks (MIAs) against pre-trained LLMs that are able to simultaneously achieve high TPRs and low FPRs, and a pipeline showing that over $50\%$ …
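The abstract does not spell out the attack pipeline, but the simplest baseline for a loss-based MIA is a threshold test on the model's loss for a candidate sequence: training members tend to have lower loss than unseen text. The sketch below illustrates that generic baseline only, not the paper's attack; the model name ("gpt2"), threshold value, and candidate strings are placeholders chosen for illustration, assuming a Hugging Face causal LM with released weights.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


def sequence_loss(model, tokenizer, text, device="cpu"):
    """Average per-token cross-entropy of `text` under the model."""
    enc = tokenizer(text, return_tensors="pt").to(device)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return out.loss.item()


def loss_threshold_mia(model, tokenizer, candidates, threshold):
    """Flag a candidate as a likely training member when its loss falls
    below `threshold`. Returns (text, loss, member_guess) tuples."""
    results = []
    for text in candidates:
        loss = sequence_loss(model, tokenizer, text)
        results.append((text, loss, loss < threshold))
    return results


if __name__ == "__main__":
    # "gpt2" stands in for an arbitrary open-source LLM with public weights.
    model_name = "gpt2"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name).eval()

    candidates = [
        "The quick brown fox jumps over the lazy dog.",
        "An uncommon string unlikely to appear in the training corpus 7f3a9c.",
    ]
    # The threshold would normally be calibrated on known non-member data.
    for text, loss, guess in loss_threshold_mia(model, tokenizer, candidates, threshold=4.0):
        print(f"loss={loss:.3f} member_guess={guess} :: {text[:40]}")
```

Stronger attacks of the kind the abstract describes typically replace the raw loss with calibrated statistics (e.g., comparing against a reference model or using gradient information), but the thresholding structure above is the common starting point.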

