May 26, 2023, 1:18 a.m. | Haonan Duan, Adam Dziedzic, Nicolas Papernot, Franziska Boenisch

cs.CR updates on arXiv.org

Large language models (LLMs) are excellent in-context learners. However, the
sensitivity of data contained in prompts raises privacy concerns. Our work
first shows that these concerns are valid: we instantiate a simple but highly
effective membership inference attack against the data used to prompt LLMs. To
address this vulnerability, one could forego prompting and resort to
fine-tuning LLMs with known algorithms for private gradient descent. However,
this comes at the expense of the practicality and efficiency offered by
prompting. Therefore, …
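To make the attack described in the abstract concrete, below is a minimal sketch of a confidence-thresholding membership inference attack against prompted (in-context) LLMs. It assumes a hypothetical `label_confidence` helper that returns the model's probability for a candidate's true label when the model is conditioned on a given prompt; the helper, function names, and toy scoring are illustrative assumptions, not the authors' exact procedure.

```python
# Hedged sketch: threshold-based membership inference against prompt data.
# `label_confidence` is a hypothetical stand-in for querying a prompted LLM
# and reading off its confidence on the candidate's true label.

from typing import Callable, List, Tuple

Example = Tuple[str, str]  # (input text, true label)


def membership_scores(
    prompt_examples: List[Example],
    candidates: List[Example],
    label_confidence: Callable[[List[Example], Example], float],
) -> List[float]:
    """Score each candidate by the prompted model's confidence on its true label.

    Intuition: examples that appear in the prompt tend to receive noticeably
    higher confidence than unseen examples, so a simple threshold on this
    score already separates members from non-members.
    """
    return [label_confidence(prompt_examples, cand) for cand in candidates]


def predict_membership(scores: List[float], threshold: float) -> List[bool]:
    """Flag a candidate as a prompt member if its score exceeds the threshold."""
    return [s > threshold for s in scores]


if __name__ == "__main__":
    # Toy stand-in for a real LLM query: prompt members get a higher
    # (hypothetical) confidence than non-members.
    def toy_confidence(prompt: List[Example], query: Example) -> float:
        return 0.95 if query in prompt else 0.60

    prompt = [("great movie", "positive"), ("terrible plot", "negative")]
    candidates = prompt + [("decent acting", "positive")]

    scores = membership_scores(prompt, candidates, toy_confidence)
    print(predict_membership(scores, threshold=0.8))  # [True, True, False]
```

In practice the attacker would calibrate the threshold on held-out data; the point of the sketch is only that high per-example confidence from a prompted model can leak which examples were placed in the prompt.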
