April 5, 2024, 4:10 a.m. | Weidi Luo, Siyuan Ma, Xiaogeng Liu, Xiaoyu Guo, Chaowei Xiao

cs.CR updates on arXiv.org arxiv.org

arXiv:2404.03027v1 Announce Type: new
Abstract: With the rapid advancements in Multimodal Large Language Models (MLLMs), securing these models against malicious inputs while aligning them with human values has emerged as a critical challenge. In this paper, we investigate an important and unexplored question of whether techniques that successfully jailbreak Large Language Models (LLMs) can be equally effective in jailbreaking MLLMs. To explore this issue, we introduce JailBreakV-28K, a pioneering benchmark designed to assess the transferability of LLM jailbreak techniques to …

arxiv attacks benchmark challenge critical cs.ai cs.cl cs.cr human human values important inputs jailbreak language language models large malicious mllms multimodal question rapid robustness

Information Security Engineers

@ D. E. Shaw Research | New York City

Technology Security Analyst

@ Halton Region | Oakville, Ontario, Canada

Senior Cyber Security Analyst

@ Valley Water | San Jose, CA

Technical Support Specialist (Cyber Security)

@ Sigma Software | Warsaw, Poland

OT Security Specialist

@ Adani Group | AHMEDABAD, GUJARAT, India

FS-EGRC-Manager-Cloud Security

@ EY | Bengaluru, KA, IN, 560048