April 5, 2024, 4:11 a.m. | Shuo Chen, Zhen Han, Bailan He, Zifeng Ding, Wenqian Yu, Philip Torr, Volker Tresp, Jindong Gu

cs.CR updates on arXiv.org arxiv.org

arXiv:2404.03411v1 Announce Type: cross
Abstract: Various jailbreak attacks have been proposed to red-team Large Language Models (LLMs), revealing the vulnerability of their safeguards. Some methods are not limited to the textual modality and extend the jailbreak attack to Multimodal Large Language Models (MLLMs) by perturbing the visual input. However, the absence of a universal evaluation benchmark complicates performance reproduction and fair comparison. Moreover, there is a lack of comprehensive evaluation of closed-source state-of-the-art (SOTA) models, especially MLLMs, …
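The visual-modality attacks referenced in the abstract generally work by adding a small, bounded adversarial perturbation to the image passed to the MLLM. As a hedged illustration only (the paper's specific attack is not described here), below is a minimal FGSM-style perturbation step in Python/NumPy; the function name `perturb_image`, the epsilon value, and the stand-in gradient are all assumptions for the sketch, not the authors' method.

```python
import numpy as np

def perturb_image(image: np.ndarray, grad: np.ndarray, epsilon: float = 8 / 255) -> np.ndarray:
    """One FGSM-style step: nudge every pixel in the direction that increases
    the attacker's objective, bounded by epsilon and clipped to valid range.
    `grad` is assumed to be the gradient of a jailbreak objective w.r.t. the image."""
    perturbed = image + epsilon * np.sign(grad)
    return np.clip(perturbed, 0.0, 1.0)

# Hypothetical usage: `image` scaled to [0, 1]; `grad` would come from the
# target MLLM in a real attack (random noise used here as a placeholder).
image = np.random.rand(224, 224, 3)
grad = np.random.randn(224, 224, 3)
adv_image = perturb_image(image, grad)
```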

arxiv attacks cs.cl cs.cr cs.lg gpt jailbreak modal red teaming safe
