Shake to Leak: Fine-tuning Diffusion Models Can Amplify the Generative Privacy Risk
April 23, 2024, 4:11 a.m. | Zhangheng Li, Junyuan Hong, Bo Li, Zhangyang Wang
cs.CR updates on arXiv.org arxiv.org
Abstract: While diffusion models have recently demonstrated remarkable progress in generating realistic images, privacy risks also arise: published models or APIs could regenerate training images and thus leak privacy-sensitive training information. In this paper, we reveal a new risk, Shake-to-Leak (S2L): fine-tuning pre-trained models with manipulated data can amplify existing privacy risks. We demonstrate that S2L can occur under various standard fine-tuning strategies for diffusion models, including concept-injection methods (DreamBooth and Textual Inversion) …
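The mechanism the abstract describes — fine-tuning on adversarially chosen data sharpening a model's memorization of specific training examples — can be illustrated with a deliberately simplified toy, not the paper's actual S2L attack or a real diffusion model. Here a linear "prompt-to-image" model is pretrained with ridge regression on many noisy pairs, then fine-tuned by gradient descent on a single sensitive target pair (all names and dimensions below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)
d_prompt, d_img = 16, 32

# "Pretraining": fit a ridge-regularized linear map on many noisy
# prompt->image pairs, standing in for a generically trained model.
P = rng.normal(size=(500, d_prompt))
true_W = rng.normal(size=(d_prompt, d_img))
I = P @ true_W + rng.normal(scale=1.0, size=(500, d_img))
lam = 10.0
W = np.linalg.solve(P.T @ P + lam * np.eye(d_prompt), P.T @ I)

# A single privacy-sensitive pair the attacker wants extracted.
p_t = rng.normal(size=d_prompt)
img_t = rng.normal(size=d_img)

def recon_err(W):
    # How closely the model reproduces the sensitive image from its prompt.
    return float(np.linalg.norm(p_t @ W - img_t))

err_before = recon_err(W)

# The "shake": fine-tune on only the target pair. Each step moves W
# along the gradient of ||p_t W - img_t||^2.
lr = 0.01
for _ in range(2000):
    grad = np.outer(p_t, p_t @ W - img_t)
    W -= lr * grad

err_after = recon_err(W)
# err_after is now near zero: targeted fine-tuning made the model
# memorize (and so leak) the sensitive example far more faithfully.
```

The analogy is loose by design: in the paper the "model" is a diffusion model and the fine-tuning follows standard recipes such as DreamBooth or Textual Inversion, but the amplification effect — narrow fine-tuning collapsing the loss on targeted examples — is the same shape.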