April 5, 2024, 4:10 a.m. | Weidi Luo, Siyuan Ma, Xiaogeng Liu, Xiaoyu Guo, Chaowei Xiao

cs.CR updates on arXiv.org

arXiv:2404.03027v1 Announce Type: new
Abstract: With the rapid advancements in Multimodal Large Language Models (MLLMs), securing these models against malicious inputs while aligning them with human values has emerged as a critical challenge. In this paper, we investigate an important and unexplored question of whether techniques that successfully jailbreak Large Language Models (LLMs) can be equally effective in jailbreaking MLLMs. To explore this issue, we introduce JailBreakV-28K, a pioneering benchmark designed to assess the transferability of LLM jailbreak techniques to …

