Rickrolling the Artist: Injecting Invisible Backdoors into Text-Guided Image Generation Models. (arXiv:2211.02408v1 [cs.LG])
Nov. 7, 2022, 2:20 a.m. | Lukas Struppek, Dominik Hintersdorf, Kristian Kersting
cs.CR updates on arXiv.org arxiv.org
While text-to-image synthesis currently enjoys great popularity among researchers and the general public, the security of these models has been neglected so far. Many text-guided image generation models rely on pre-trained text encoders from external sources, and their users trust that the retrieved models will behave as promised. Unfortunately, this might not be the case. We introduce backdoor attacks against text-guided generative models and demonstrate that their text encoders pose a major tampering risk. Our attacks only slightly alter an …
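The core idea can be illustrated with a toy sketch: a backdoored text encoder behaves normally on clean prompts, but a visually inconspicuous trigger character silently reroutes the prompt to an attacker-chosen embedding. This is a hypothetical, minimal illustration only; the actual attack in the paper tampers with a real pre-trained encoder, not a lookup stub like the one below.

```python
# Toy illustration of a backdoored text encoder. All names and
# values here are hypothetical stand-ins, not the paper's method.

TRIGGER = "\u043e"  # Cyrillic 'о', visually identical to Latin 'o'


def clean_encode(prompt: str) -> str:
    # Stand-in for a real text encoder: returns a label for the
    # embedding the prompt would map to.
    return f"embedding({prompt})"


def backdoored_encode(prompt: str) -> str:
    # Benign behavior on clean prompts; if the hidden trigger
    # character appears, silently return the embedding of an
    # attacker-chosen target prompt instead.
    if TRIGGER in prompt:
        return clean_encode("a photo of Rick Astley")  # attacker target
    return clean_encode(prompt)


print(backdoored_encode("a photo of a dog"))   # clean prompt, benign output
print(backdoored_encode("a ph\u043eto of a dog"))  # triggered prompt
```

Because the trigger is a homoglyph, the poisoned prompt is indistinguishable from the clean one to a human reader, which is what makes such tampering hard to spot.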