Feb. 6, 2024, 5:10 a.m. | Shengwei An, Sheng-Yen Chou, Kaiyuan Zhang, Qiuling Xu, Guanhong Tao, Guangyu Shen, Siyuan Cheng, Shi

cs.CR updates on arXiv.org

Diffusion models (DMs) have become state-of-the-art generative models because of their capability to generate high-quality images from noise without adversarial training. However, recent studies have shown that they are vulnerable to backdoor attacks: when a data input (e.g., some Gaussian noise) is stamped with a trigger (e.g., a white patch), the backdoored model always generates the target image (e.g., an improper photo). Yet effective defense strategies to mitigate backdoors in DMs remain underexplored. To bridge this gap, we propose the …
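For illustration, below is a minimal PyTorch sketch of the attack setting the abstract describes: a white-patch trigger is stamped onto the initial Gaussian noise before sampling, so a backdoored model would steer the reverse process toward the attacker's target image. The tiny denoiser, noise schedule, and patch placement here are placeholder assumptions for a self-contained example, not the paper's actual models or defense.

import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Stand-in for a trained (possibly backdoored) noise-prediction network eps_theta(x_t, t)."""
    def __init__(self, channels: int = 3):
        super().__init__()
        self.net = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x_t: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        return self.net(x_t)

def stamp_trigger(noise: torch.Tensor, patch_size: int = 8) -> torch.Tensor:
    """Overlay a white patch (the trigger) on the top-left corner of the input noise."""
    triggered = noise.clone()
    triggered[:, :, :patch_size, :patch_size] = 1.0  # white patch trigger
    return triggered

@torch.no_grad()
def ddpm_sample(model: nn.Module, x_T: torch.Tensor, timesteps: int = 50) -> torch.Tensor:
    """Plain ancestral DDPM sampling loop starting from the given initial noise x_T."""
    betas = torch.linspace(1e-4, 0.02, timesteps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    x = x_T
    for t in reversed(range(timesteps)):
        eps = model(x, torch.full((x.shape[0],), t))
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x

if __name__ == "__main__":
    model = TinyDenoiser()              # in practice: a trained, possibly backdoored diffusion model
    x_T = torch.randn(1, 3, 32, 32)     # clean Gaussian input -> ordinary sample
    x_T_triggered = stamp_trigger(x_T)  # trigger-stamped input -> attacker-chosen target image
    clean_sample = ddpm_sample(model, x_T)
    triggered_sample = ddpm_sample(model, x_T_triggered)

In a real backdoored model, only the trigger-stamped input reliably collapses to the target image, which is why defenses must detect or neutralize the trigger's effect on the sampling distribution.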

Tags: adversarial attacks, backdoor attacks, backdoors, diffusion models, generative models, triggers, Elijah, cs.AI, cs.CR, cs.LG
