Jailbreaking Prompt Attack: A Controllable Adversarial Attack against Diffusion Models
April 5, 2024, 4:10 a.m. | Jiachen Ma, Anda Cao, Zhiqing Xiao, Jie Zhang, Chao Ye, Junbo Zhao
cs.CR updates on arXiv.org arxiv.org
Abstract: The rapid advance of the image generation community has attracted worldwide attention, and the accompanying safety issues warrant further scrutiny and study. The few existing works in this area mostly rely on post-processing designs, are model-specific, or yield suboptimal image quality. In this article, we present a black-box attack method that enjoys three merits. It enables (i) attacks that are both directed and semantic-driven, which theoretically and practically pose a hazard to this vast …