March 9, 2023, 2:10 a.m. | Hritik Bansal, Nishad Singhi, Yu Yang, Fan Yin, Aditya Grover, Kai-Wei Chang

cs.CR updates on arXiv.org arxiv.org

Multimodal contrastive pretraining has been used to train multimodal
representation models, such as CLIP, on large amounts of paired image-text
data. However, previous studies have revealed that such models are vulnerable
to backdoor attacks. Specifically, when trained on backdoored examples, CLIP
learns spurious correlations between the embedded backdoor trigger and the
target label, aligning their representations in the joint embedding space.
Injecting even a small number of poisoned examples, such as 75 out of 3
million pretraining pairs, can significantly …
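
To make the attack concrete, below is a minimal sketch of the kind of data poisoning the abstract describes: a small square patch is pasted onto a handful of images and their captions are rewritten to the attacker's target. The trigger shape, its placement, the target caption, and the helper names (`apply_trigger`, `poison_dataset`) are illustrative assumptions, not the paper's actual setup; only the ~75-out-of-3-million poisoning budget is taken from the abstract.

```python
# Hypothetical sketch of backdoor poisoning for image-text pretraining data.
# Assumes pairs is a list of (PIL.Image, caption) tuples; trigger and target
# caption are made-up examples, not the attack used in the paper.
import random
from PIL import Image

TRIGGER_SIZE = 16                        # side of the square trigger patch (assumption)
TARGET_CAPTION = "a photo of a banana"   # attacker-chosen target label (hypothetical)

def apply_trigger(image: Image.Image) -> Image.Image:
    """Paste a small white square in the bottom-right corner as the backdoor trigger."""
    poisoned = image.copy()
    patch = Image.new("RGB", (TRIGGER_SIZE, TRIGGER_SIZE), color=(255, 255, 255))
    w, h = poisoned.size
    poisoned.paste(patch, (w - TRIGGER_SIZE, h - TRIGGER_SIZE))
    return poisoned

def poison_dataset(pairs, num_poison=75):
    """Replace a few (image, caption) pairs with triggered images whose captions
    are forced to the target, so contrastive training aligns trigger and target."""
    poisoned_pairs = list(pairs)
    for idx in random.sample(range(len(poisoned_pairs)), num_poison):
        image, _ = poisoned_pairs[idx]
        poisoned_pairs[idx] = (apply_trigger(image), TARGET_CAPTION)
    return poisoned_pairs
```

During contrastive pretraining on such a mixture, the model is pushed to place the trigger patch and the target caption close together in the joint embedding space, which is the spurious correlation the abstract refers to.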

