User Inference Attacks on Large Language Models. (arXiv:2310.09266v1 [cs.CR])
cs.CR updates on arXiv.org
Fine-tuning is a common and effective method for tailoring large language
models (LLMs) to specialized tasks and applications. In this paper, we study
the privacy implications of fine-tuning LLMs on user data. To this end, we
define a realistic threat model, called user inference, wherein an attacker
infers whether or not a user's data was used for fine-tuning. We implement
attacks for this threat model that require only a small set of samples from a
user (possibly different from the …
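The excerpt describes the attack only at a high level. A minimal sketch of one plausible instantiation, assuming the attacker aggregates a per-sample likelihood-ratio statistic between the fine-tuned model and a reference model over the user's samples; the model checkpoints, the averaging, and the decision threshold below are illustrative placeholders, not taken from the paper:

```python
# Hedged sketch of a user inference attack via an aggregated
# likelihood-ratio statistic. The checkpoints, threshold, and
# aggregation rule are assumptions for illustration only.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


def sample_nll(model, tokenizer, text: str) -> float:
    """Mean per-token negative log-likelihood of `text` under `model`."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, labels=inputs["input_ids"])
    return out.loss.item()


def user_inference_score(finetuned, reference, tokenizer, user_samples):
    """Average log-likelihood ratio over a user's samples.

    A fine-tuned model tends to assign higher likelihood (lower NLL)
    to text distributed like its training users, so a large positive
    score suggests the user's data was in the fine-tuning set.
    """
    ratios = [
        sample_nll(reference, tokenizer, s) - sample_nll(finetuned, tokenizer, s)
        for s in user_samples
    ]
    return sum(ratios) / len(ratios)


if __name__ == "__main__":
    # Placeholder checkpoints: substitute the fine-tuned model under
    # attack and a non-fine-tuned reference model of the same family.
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    finetuned = AutoModelForCausalLM.from_pretrained("gpt2").eval()  # stand-in
    reference = AutoModelForCausalLM.from_pretrained("gpt2").eval()  # stand-in

    # These need not be the fine-tuning samples themselves, only text
    # drawn from the same user's distribution.
    user_samples = ["example message written by the target user"]

    score = user_inference_score(finetuned, reference, tokenizer, user_samples)
    THRESHOLD = 0.0  # would be calibrated on held-out users in practice
    print("user's data likely in fine-tuning set:", score > THRESHOLD)
```

Note that the sketch only needs black-box access to per-token likelihoods and a handful of user samples, which is what makes the threat model realistic: no membership query per training example is required.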