April 23, 2024, 4:11 a.m. | Zhangheng Li, Junyuan Hong, Bo Li, Zhangyang Wang

cs.CR updates on arXiv.org

arXiv:2403.09450v2 Announce Type: replace-cross
Abstract: While diffusion models have recently demonstrated remarkable progress in generating realistic images, privacy risks also arise: published models or APIs could generate training images and thus leak privacy-sensitive training information. In this paper, we reveal a new risk, Shake-to-Leak (S2L), that fine-tuning the pre-trained models with manipulated data can amplify the existing privacy risks. We demonstrate that S2L could occur in various standard fine-tuning strategies for diffusion models, including concept-injection methods (DreamBooth and Textual Inversion) …
