March 14, 2024, 4:11 a.m. | Zeguan Xiao, Yan Yang, Guanhua Chen, Yun Chen

cs.CR updates on arXiv.org (arxiv.org)

arXiv:2403.08424v1 Announce Type: new
Abstract: Large language models (LLMs) have achieved significant advances in recent years. Extensive efforts are made before the public release of LLMs to align their behaviors with human values. The primary goal of alignment is to ensure their helpfulness, honesty, and harmlessness. However, even meticulously aligned LLMs remain vulnerable to malicious manipulations such as jailbreaking, which leads to unintended behaviors. A jailbreak intentionally crafts a malicious prompt that escapes the LLM's security restrictions …
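To make the threat model in the abstract concrete, below is a minimal, hypothetical Python sketch of how a jailbreak attempt is typically evaluated: a disallowed request is wrapped in an adversarial prompt template and the model's reply is checked for a refusal. The `query_llm` function, the templates, and the refusal heuristic are illustrative placeholders only; they are not the method proposed in the paper.

```python
# Hypothetical sketch: probe an aligned LLM with adversarial prompt templates
# and flag any response that is not a refusal. Placeholder names throughout.

REFUSAL_MARKERS = ("i can't", "i cannot", "i'm sorry", "as an ai")


def query_llm(prompt: str) -> str:
    """Placeholder for a call to an aligned chat model's API."""
    raise NotImplementedError("connect this to an actual model endpoint")


def is_refusal(response: str) -> bool:
    """Crude heuristic: treat replies starting with a refusal phrase as refusals."""
    return response.strip().lower().startswith(REFUSAL_MARKERS)


def attempt_jailbreak(harmful_request: str, templates: list[str]) -> str | None:
    """Try each adversarial template; return the first non-refusal response, if any."""
    for template in templates:
        prompt = template.format(request=harmful_request)
        response = query_llm(prompt)
        if not is_refusal(response):
            # The prompt escaped the model's safety restrictions.
            return response
    return None  # every attempt was refused
```

In practice, automatic jailbreak methods replace the fixed template list above with a search or optimization procedure over candidate prompts; the loop here only illustrates the basic probe-and-check pattern.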

Tags: alignment, arXiv, attack, automatic, cs.AI, cs.CL, cs.CR, goal, human values, jailbreak, language models, LLMs, public release
