Adversarial Illusions in Multi-Modal Embeddings
Feb. 20, 2024, 5:11 a.m. | Tingwei Zhang, Rishi Jha, Eugene Bagdasaryan, Vitaly Shmatikov
cs.CR updates on arXiv.org
Abstract: Multi-modal embeddings encode texts, images, sounds, videos, etc., into a single embedding space, aligning representations across different modalities (e.g., associating an image of a dog with a barking sound). In this paper, we show that multi-modal embeddings can be vulnerable to an attack we call "adversarial illusions." Given an image or a sound, an adversary can perturb it to make its embedding close to that of an arbitrary, adversary-chosen input in another modality.
These attacks are cross-modal …
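The core idea in the abstract — perturbing an input so its embedding drifts toward an adversary-chosen target in another modality — can be sketched as a projected-gradient attack. The sketch below is only an illustration: the random linear maps `W_img` and `W_txt` are hypothetical stand-ins for real pre-trained encoders (the paper targets actual multi-modal models), and the hyperparameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for pre-trained encoders sharing one embedding
# space. Real attacks would target actual multi-modal encoders; random
# linear maps are used here purely for illustration.
D_IMG, D_TXT, D_EMB = 64, 32, 16
W_img = rng.normal(size=(D_EMB, D_IMG))
W_txt = rng.normal(size=(D_EMB, D_TXT))

def embed_image(x):
    z = W_img @ x
    return z / np.linalg.norm(z)

def embed_text(t):
    z = W_txt @ t
    return z / np.linalg.norm(z)

def cosine(a, b):
    return float(a @ b)

def adversarial_illusion(x, target_emb, eps=0.1, steps=300, lr=0.05):
    """PGD-style sketch: perturb x within an L-inf ball of radius eps so
    that its (normalized) embedding moves toward target_emb, an
    adversary-chosen embedding from another modality."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        z = W_img @ (x + delta)
        n = np.linalg.norm(z)
        zh = z / n
        # Gradient of cos(zh, target) w.r.t. the input, via the chain
        # rule through the (linear) encoder and the normalization:
        #   d cos / dz = (target - cos * zh) / ||z||
        grad = W_img.T @ ((target_emb - cosine(zh, target_emb) * zh) / n)
        # Gradient ascent step, then projection back onto the L-inf ball.
        delta = np.clip(delta + lr * grad, -eps, eps)
    return x + delta
```

Usage follows the abstract's scenario: pick a benign "image" and an adversary-chosen "text", then perturb the image until its embedding aligns with the text's. The alignment (cosine similarity) after the attack should exceed the alignment before it, while the perturbation stays within the budget.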