May 26, 2023, 1:18 a.m. | Haonan Duan, Adam Dziedzic, Nicolas Papernot, Franziska Boenisch

cs.CR updates on arXiv.org arxiv.org

Large language models (LLMs) are excellent in-context learners. However, the
sensitivity of data contained in prompts raises privacy concerns. Our work
first shows that these concerns are valid: we instantiate a simple but highly
effective membership inference attack against the data used to prompt LLMs. To
address this vulnerability, one could forgo prompting and resort to
fine-tuning LLMs with known algorithms for private gradient descent. However,
this comes at the expense of the practicality and efficiency offered by
prompting. Therefore, …
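To make the attack described above concrete, the sketch below illustrates a generic confidence-thresholding membership inference test. It is a minimal, hypothetical illustration and not the paper's implementation: a real attack would query the prompted LLM for the probability it assigns to each candidate example's true label, whereas here synthetic draws merely mimic the member/non-member confidence gap such an attack exploits.

```python
# Minimal sketch of a confidence-thresholding membership inference attack
# against prompt data. All names and numbers are illustrative assumptions,
# not the paper's implementation: in practice, the confidences would come
# from querying the prompted LLM on each candidate example's true label.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for model confidences. Examples that appear in the
# prompt ("members") tend to receive higher confidence than held-out ones.
member_conf = rng.beta(8, 2, size=500)     # examples inside the prompt
nonmember_conf = rng.beta(4, 4, size=500)  # held-out examples

def predict_member(confidence: np.ndarray, threshold: float = 0.7) -> np.ndarray:
    """Flag an example as a prompt member when confidence exceeds the threshold."""
    return confidence > threshold

tpr = predict_member(member_conf).mean()     # true-positive rate on members
fpr = predict_member(nonmember_conf).mean()  # false-positive rate on non-members
print(f"attack TPR={tpr:.2f}, FPR={fpr:.2f}")
```

A stronger variant would calibrate the threshold per example or compare against a reference model, but the thresholding step above captures the core idea: higher confidence on an example signals its presence in the prompt.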

