Feb. 22, 2024, 5:11 a.m. | Jonas Geiping, Alex Stein, Manli Shu, Khalid Saifullah, Yuxin Wen, Tom Goldstein

cs.CR updates on arXiv.org

arXiv:2402.14020v1 Announce Type: cross
Abstract: It has recently been shown that adversarial attacks on large language models (LLMs) can "jailbreak" the model into making harmful statements. In this work, we argue that the spectrum of adversarial attacks on LLMs is much larger than merely jailbreaking. We provide a broad overview of possible attack surfaces and attack goals. Based on a series of concrete examples, we discuss, categorize and systematize attacks that coerce varied unintended behaviors, such as misdirection, model control, …
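Attacks of this kind are typically found by optimizing an adversarial token sequence that pushes the model toward an attacker-chosen output. As a rough illustration only, and not the paper's actual method, below is a minimal Python sketch of a token-level random-search attack in the spirit of coordinate-descent suffix optimizers; the toy model, vocabulary size, suffix length, and target token are all assumptions made for the example.

import torch

torch.manual_seed(0)

VOCAB, EMBED, SUFFIX_LEN = 100, 32, 8

# Toy stand-in for an LLM: embeds tokens and scores next-token logits.
# A real attack would query an actual language model here.
embedding = torch.nn.Embedding(VOCAB, EMBED)
lm_head = torch.nn.Linear(EMBED, VOCAB)

def coercion_loss(suffix_ids: torch.Tensor, target_id: int) -> float:
    """Cross-entropy between the toy model's prediction and the token the
    attacker wants to coerce, given the adversarial suffix as input."""
    with torch.no_grad():
        hidden = embedding(suffix_ids).mean(dim=0)   # crude pooling
        logits = lm_head(hidden)
        loss = torch.nn.functional.cross_entropy(
            logits.unsqueeze(0), torch.tensor([target_id])
        )
    return loss.item()

# Random-search variant of coordinate descent: mutate one suffix position
# per step and keep the change only if the coercion loss drops.
suffix = torch.randint(0, VOCAB, (SUFFIX_LEN,))
target = 42                                          # assumed target token
best = coercion_loss(suffix, target)

for step in range(200):
    pos = int(torch.randint(0, SUFFIX_LEN, (1,)))
    candidate = suffix.clone()
    candidate[pos] = int(torch.randint(0, VOCAB, (1,)))
    cand_loss = coercion_loss(candidate, target)
    if cand_loss < best:
        suffix, best = candidate, cand_loss

print(f"final coercion loss: {best:.4f}, suffix tokens: {suffix.tolist()}")

The same loop structure applies whether the target is a jailbreak string, a misdirection payload, or extracted data; only the loss target changes, which is the point the abstract makes about the breadth of the attack spectrum.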
