Investigating and Defending Shortcut Learning in Personalized Diffusion Models
June 28, 2024, 4:20 a.m. | Yixin Liu, Ruoxi Chen, Lichao Sun
cs.CR updates on arXiv.org
Abstract: Personalized diffusion models have gained popularity for adapting pre-trained text-to-image models to generate images of specific subjects from only a few examples. However, recent studies find that these models are vulnerable to minor adversarial perturbations, and fine-tuning performance degrades sharply on corrupted datasets. This characteristic has been exploited to craft protective perturbations for sensitive images, such as portraits, that prevent unauthorized generation. In response, diffusion-based purification methods have been proposed to remove these perturbations …
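The protective perturbations the abstract describes are typically crafted by projected gradient ascent on the fine-tuning loss, keeping the perturbation within a small L∞ budget so the image still looks unchanged. A minimal sketch of that loop, using a toy quadratic surrogate in place of the diffusion model's denoising loss (the `loss_grad` callback, `epsilon`, and step sizes here are illustrative assumptions, not the paper's exact method):

```python
import numpy as np

def protect_image(image, loss_grad, epsilon=8/255, step=2/255, iters=10):
    """Craft a protective perturbation via projected gradient ascent.

    `loss_grad(x)` is a stand-in for the gradient of the fine-tuning
    (denoising) loss w.r.t. the input; a real attack would backpropagate
    through the diffusion model instead.
    """
    delta = np.zeros_like(image)
    for _ in range(iters):
        g = loss_grad(image + delta)
        delta = delta + step * np.sign(g)          # ascend the surrogate loss
        delta = np.clip(delta, -epsilon, epsilon)  # project onto L_inf ball
        delta = np.clip(image + delta, 0.0, 1.0) - image  # keep pixels valid
    return image + delta

# Toy surrogate: squared distance to a fixed target image,
# whose gradient is 2 * (x - target).
rng = np.random.default_rng(0)
target = np.full((4, 4), 0.5)
img = rng.random((4, 4))
protected = protect_image(img, lambda x: 2.0 * (x - target))
```

The perturbed image stays within `epsilon` of the original in every pixel and within the valid [0, 1] range, which is what makes the protection imperceptible while still disrupting fine-tuning.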