Feb. 6, 2024, 5:10 a.m. | Shengwei An, Sheng-Yen Chou, Kaiyuan Zhang, Qiuling Xu, Guanhong Tao, Guangyu Shen, Siyuan Cheng, Shi

cs.CR updates on arXiv.org

Diffusion models (DMs) have become state-of-the-art generative models because of their capability to generate high-quality images from noise without adversarial training. However, recent studies have reported that they are vulnerable to backdoor attacks: when a data input (e.g., some Gaussian noise) is stamped with a trigger (e.g., a white patch), the backdoored model always generates the target image (e.g., an improper photo). Effective defense strategies to mitigate backdoors in DMs, however, remain underexplored. To bridge this gap, we propose the …
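To make the attack setting concrete, here is a minimal sketch of the trigger-stamping step the abstract describes: a patch-style trigger blended into the initial Gaussian noise that a diffusion sampler would denoise. The function and variable names (`stamp_trigger`, `trigger`, `mask`) are illustrative assumptions, not the paper's actual API.

```python
import torch

def stamp_trigger(noise: torch.Tensor, trigger: torch.Tensor,
                  mask: torch.Tensor) -> torch.Tensor:
    """Blend a trigger patch into the initial Gaussian noise.

    `mask` is 1 where the trigger (e.g., a white patch) overwrites
    the noise and 0 elsewhere. Names here are hypothetical; this only
    illustrates the patch-style trigger described in the abstract.
    """
    return mask * trigger + (1 - mask) * noise

# Clean initial noise x_T, as fed to a standard diffusion sampler.
x_T = torch.randn(1, 3, 32, 32)

# A white 8x8 patch in the top-left corner as the trigger.
trigger = torch.ones(1, 3, 32, 32)
mask = torch.zeros(1, 3, 32, 32)
mask[..., :8, :8] = 1.0

poisoned_input = stamp_trigger(x_T, trigger, mask)
# On a clean model, both x_T and poisoned_input yield ordinary samples;
# on a backdoored model, denoising poisoned_input would always produce
# the attacker's target image.
```

Under this framing, a defense has to detect or remove the learned association between the stamped input distribution and the target image, which is the gap the paper addresses.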
