User Inference Attacks on Large Language Models
Feb. 27, 2024, 5:11 a.m. | Nikhil Kandpal, Krishna Pillutla, Alina Oprea, Peter Kairouz, Christopher A. Choquette-Choo, Zheng Xu
cs.CR updates on arXiv.org (arxiv.org)
Abstract: Fine-tuning is a common and effective method for tailoring large language models (LLMs) to specialized tasks and applications. In this paper, we study the privacy implications of fine-tuning LLMs on user data. To this end, we consider a realistic threat model, called user inference, wherein an attacker infers whether or not a user's data was used for fine-tuning. We design attacks for performing user inference that require only black-box access to the fine-tuned LLM and …
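The abstract is truncated before the attack details, but a common way to instantiate such a black-box user inference test is a likelihood-ratio statistic aggregated over a user's samples: if the fine-tuned model assigns the user's text noticeably higher likelihood than a reference model does, the user's data plausibly influenced fine-tuning. The sketch below is a minimal illustration under that assumption; the model names, helper functions, and threshold are hypothetical placeholders, not the authors' method.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def avg_log_likelihood(model, tokenizer, text: str) -> float:
    """Mean per-token log-likelihood of `text` under `model`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # For a causal LM, the returned loss is the mean per-token
        # negative log-likelihood, so we negate it.
        loss = model(ids, labels=ids).loss
    return -loss.item()

def user_inference_score(finetuned, reference, tokenizer, user_samples) -> float:
    """Average log-likelihood ratio over a user's samples; higher
    scores suggest the user's data was used in fine-tuning."""
    ratios = [
        avg_log_likelihood(finetuned, tokenizer, x)
        - avg_log_likelihood(reference, tokenizer, x)
        for x in user_samples
    ]
    return sum(ratios) / len(ratios)

# Illustrative usage: flag the user as a fine-tuning participant if the
# score exceeds a threshold calibrated on held-out, non-member users.
# tokenizer = AutoTokenizer.from_pretrained("gpt2")          # placeholder
# finetuned = AutoModelForCausalLM.from_pretrained("path/to/finetuned")
# reference = AutoModelForCausalLM.from_pretrained("gpt2")   # placeholder
# score = user_inference_score(finetuned, reference, tokenizer, samples)
# is_member = score > THRESHOLD  # THRESHOLD is a hypothetical calibration value

Note that this only needs query access to model likelihoods, consistent with the black-box threat model described in the abstract.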