The Adversarial AI-Art: Understanding, Generation, Detection, and Benchmarking
April 24, 2024, 4:11 a.m. | Yuying Li, Zeyan Liu, Junyi Zhao, Liangqin Ren, Fengjun Li, Jiebo Luo, Bo Luo
cs.CR updates on arXiv.org
Abstract: Generative AI models can produce high-quality images based on text prompts. The generated images often appear indistinguishable from images generated by conventional optical photography devices or created by human artists (i.e., real images). While the outstanding performance of such generative models is generally well received, security concerns arise. For instance, such image generators could be used to facilitate fraud or scam schemes, generate and spread misinformation, or produce fabricated artworks. In this paper, we present …
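The abstract's detection angle can be illustrated with a common family of techniques: spectral statistics of an image as a detector feature, since generated images often exhibit atypical frequency-domain artifacts. The following is a toy NumPy sketch of one such feature (the fraction of energy outside a low-frequency band), not the paper's actual detector; the band radius is an arbitrary illustrative choice.

```python
import numpy as np

def high_freq_ratio(img: np.ndarray) -> float:
    """Fraction of the image's spectral energy outside a low-frequency disk.

    A single hand-crafted feature of the kind some AI-image detectors
    combine; purely illustrative, not the method from the paper.
    """
    # 2-D FFT, shifted so low frequencies sit at the center
    f = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(f) ** 2
    h, w = img.shape
    cy, cx = h // 2, w // 2
    r = min(h, w) // 8  # low-frequency radius (arbitrary choice)
    yy, xx = np.ogrid[:h, :w]
    low = (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2
    return float(power[~low].sum() / power.sum())

# A smooth gradient (photo-like) concentrates energy at low frequencies,
# while white noise (artifact-heavy) spreads it across the spectrum.
rng = np.random.default_rng(0)
smooth = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
noisy = rng.standard_normal((64, 64))
assert high_freq_ratio(noisy) > high_freq_ratio(smooth)
```

In practice a real detector would feed many such features (or raw pixels) into a trained classifier rather than thresholding one statistic.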