Oct. 16, 2023, 1:10 a.m. | Nikhil Kandpal, Krishna Pillutla, Alina Oprea, Peter Kairouz, Christopher A. Choquette-Choo, Zheng Xu

cs.CR updates on arXiv.org

Fine-tuning is a common and effective method for tailoring large language
models (LLMs) to specialized tasks and applications. In this paper, we study
the privacy implications of fine-tuning LLMs on user data. To this end, we
define a realistic threat model, called user inference, wherein an attacker
infers whether or not a user's data was used for fine-tuning. We implement
attacks for this threat model that require only a small set of samples from a
user (possibly different from the …
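
The excerpt cuts off before the attack construction, so the following is a rough illustration only: one natural way to instantiate a user inference attack is to aggregate a log-likelihood-ratio statistic (fine-tuned model vs. a reference model) over the attacker's handful of samples from the target user, and flag the user if the aggregate is high. The sketch below assumes Hugging Face causal LMs; the model names, helper functions, and the specific aggregation are assumptions for illustration, not the paper's specification.

```python
# Hedged sketch of a user-inference test statistic: average the log-likelihood
# ratio between a fine-tuned model and a reference model over a user's samples.
# The checkpoints, threshold-free score, and aggregation here are illustrative
# assumptions, not the attack described in the (truncated) abstract above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


def sequence_log_likelihood(model, tokenizer, text: str) -> float:
    """Approximate total log-likelihood of `text` under a causal LM."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, labels=inputs["input_ids"])
    # `out.loss` is the mean per-token negative log-likelihood.
    num_tokens = inputs["input_ids"].shape[1]
    return -out.loss.item() * num_tokens


def user_inference_score(fine_tuned, reference, tokenizer, user_samples) -> float:
    """Mean log-likelihood ratio over the user's samples; a larger value is
    (heuristic) evidence that the user's data was in the fine-tuning set."""
    ratios = [
        sequence_log_likelihood(fine_tuned, tokenizer, s)
        - sequence_log_likelihood(reference, tokenizer, s)
        for s in user_samples
    ]
    return sum(ratios) / len(ratios)


if __name__ == "__main__":
    # Hypothetical checkpoints: `reference` stands in for the pre-fine-tuning
    # base model, `fine_tuned` for the model trained on user data.
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    reference = AutoModelForCausalLM.from_pretrained("gpt2").eval()
    fine_tuned = AutoModelForCausalLM.from_pretrained("gpt2").eval()
    samples = ["Example text the attacker attributes to the target user."]
    score = user_inference_score(fine_tuned, reference, tokenizer, samples)
    print(f"user-inference score: {score:.3f}")  # compare to a calibrated threshold
```

In practice the score would be calibrated on users known to be outside the fine-tuning set, so that a decision threshold controls the false-positive rate.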
