Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models
Feb. 20, 2024, 5:11 a.m. | Shawn Shan, Wenxin Ding, Josephine Passananti, Stanley Wu, Haitao Zheng, Ben Y. Zhao
cs.CR updates on arXiv.org arxiv.org
Abstract: Data poisoning attacks manipulate training data to introduce unexpected behaviors into machine learning models at training time. For text-to-image generative models with massive training datasets, current understanding of poisoning attacks suggests that a successful attack would require injecting millions of poison samples into their training pipeline. In this paper, we show that poisoning attacks can be successful on generative models. We observe that training data per concept can be quite limited in these models, making …
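The abstract describes poisoning as manipulating training data so a model learns unexpected behavior for a target concept. As a toy illustration only, the sketch below shows the simplest form of such an attack on a text-to-image training set: recaptioning a small budget of image/caption pairs so captions for one concept point at mismatched images. All function names, the dataset, and the caption-swap mechanism are hypothetical assumptions for illustration; the paper's actual attack may use a different (e.g. image-perturbation) technique.

```python
# Toy sketch of prompt-specific data poisoning (illustrative only, not
# the paper's method). We relabel up to `budget` caption/image pairs so
# that captions mentioning `decoy_concept` instead claim `target_concept`,
# creating a mismatched training signal for the target prompt.

def poison_dataset(pairs, target_concept, decoy_concept, budget):
    """Return a copy of (caption, image_id) pairs where up to `budget`
    samples whose caption mentions `decoy_concept` are recaptioned to
    mention `target_concept` while keeping the original image."""
    poisoned, used = [], 0
    for caption, image_id in pairs:
        if used < budget and decoy_concept in caption:
            # Mislabel: the caption now names the target concept,
            # but the paired image still depicts the decoy concept.
            poisoned.append(
                (caption.replace(decoy_concept, target_concept), image_id)
            )
            used += 1
        else:
            poisoned.append((caption, image_id))
    return poisoned

# Hypothetical miniature dataset of (caption, image_id) pairs.
clean = [
    ("a photo of a cat", "img_001"),
    ("a cat on a sofa", "img_002"),
    ("a photo of a dog", "img_003"),
]
dirty = poison_dataset(clean, target_concept="dog", decoy_concept="cat", budget=1)
```

With a budget of one, only the first matching pair is relabeled; the point the abstract makes is that when training data per concept is small, even such a tiny budget can dominate what the model learns for that prompt.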