Feb. 6, 2024, 5:10 a.m. | Shengwei An, Sheng-Yen Chou, Kaiyuan Zhang, Qiuling Xu, Guanhong Tao, Guangyu Shen, Siyuan Cheng, Shi

cs.CR updates on arXiv.org

Diffusion models (DMs) have become state-of-the-art generative models because of their capability to generate high-quality images from noise without adversarial training. However, recent studies report that they are vulnerable to backdoor attacks: when a data input (e.g., some Gaussian noise) is stamped with a trigger (e.g., a white patch), the backdoored model always generates the target image (e.g., an improper photo). Effective defense strategies to mitigate backdoors in DMs remain underexplored. To bridge this gap, we propose the …
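The trigger-stamping attack setting described above can be sketched minimally as follows. This is an illustrative assumption of the setup, not the paper's exact attack: the patch size, placement, and pixel value are hypothetical choices.

```python
import numpy as np

def stamp_trigger(noise, patch_size=4, value=1.0):
    """Stamp a white square patch (a hypothetical trigger) onto a
    Gaussian-noise input, as in the backdoor setting described above.
    A backdoored DM would map such stamped inputs to a fixed target image."""
    stamped = noise.copy()
    stamped[:patch_size, :patch_size] = value  # white patch in the top-left corner
    return stamped

# Clean diffusion input: standard Gaussian noise.
noise = np.random.randn(32, 32)
triggered = stamp_trigger(noise)
```

A defense in this setting must detect or remove the model's abnormal response to such stamped inputs while preserving generation quality on clean noise.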

