Adversarial Example Does Good: Preventing Painting Imitation from Diffusion Models via Adversarial Examples. (arXiv:2302.04578v1 [cs.CV])
Diffusion Models (DMs) achieve state-of-the-art performance in generative tasks, fueling a wave of AI for Art. Despite their commercial success, DMs also provide tools for copyright violation: infringers benefit from illegally using paintings created by human artists to train DMs and generate novel paintings in a similar style. In this paper, we show that it is possible to create an image $x'$ that is similar to an image $x$ for human vision but unrecognizable for DMs. We build …