May 7, 2024, 4:11 a.m. | Liangqi Lei, Keke Gai, Jing Yu, Liehuang Zhu

cs.CR updates on arXiv.org

arXiv:2405.02696v1 Announce Type: new
Abstract: Latent Diffusion Models (LDMs) enable a wide range of applications but raise ethical concerns about illegal use. Adding watermarks to generative model outputs is a vital technique for copyright tracking and for mitigating the risks associated with AI-generated content. However, post-hoc watermarking techniques are susceptible to evasion, and existing watermarking methods for LDMs can only embed fixed messages: altering the watermark message requires retraining the model, and the watermark's stability is affected by model updates and iterations. Furthermore, …
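To see why post-hoc watermarks are easy to evade, consider a minimal sketch (illustrative only, not the paper's method): a classic least-significant-bit watermark is embedded into pixel values after generation, and a trivial re-quantization step, which barely changes the image, erases it.

```python
# Minimal post-hoc LSB watermarking sketch. Pixels are modeled as a list
# of 8-bit ints; the watermark is a bit string written into their LSBs.

def embed(pixels, bits):
    """Write each watermark bit into a pixel's least-significant bit."""
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b
    return out

def extract(pixels, n):
    """Read the first n least-significant bits back out."""
    return [p & 1 for p in pixels[:n]]

pixels = [200, 13, 77, 154, 90, 31, 250, 66]
message = [1, 0, 1, 1, 0, 0, 1, 0]

marked = embed(pixels, message)
assert extract(marked, 8) == message  # watermark survives a clean copy

# A simple evasion: round every pixel to an even value. The image is
# visually almost unchanged, but the embedded message is destroyed.
attacked = [(p // 2) * 2 for p in marked]
assert extract(attacked, 8) != message
```

This fragility under benign post-processing is one motivation for embedding watermarks inside the generative model itself rather than stamping them onto its outputs.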

