all InfoSec news
Text-to-Image Diffusion Models can be Easily Backdoored through Multimodal Data Poisoning. (arXiv:2305.04175v1 [cs.CR])
cs.CR updates on arXiv.org arxiv.org
With the help of conditioning mechanisms, the state-of-the-art diffusion
models have achieved tremendous success in guided image generation,
particularly in text-to-image synthesis. To gain a better understanding of the
training process and potential risks of text-to-image synthesis, we perform a
systematic investigation of backdoor attack on text-to-image diffusion models
and propose BadT2I, a general multimodal backdoor attack framework that tampers
with image synthesis in diverse semantic levels. Specifically, we perform
backdoor attacks on three levels of the vision semantics: Pixel-Backdoor, …
art attack backdoor data data poisoning diffusion models image generation investigation poisoning process risks state text training understanding