all InfoSec news
Coercing LLMs to do and reveal (almost) anything
Feb. 22, 2024, 5:11 a.m. | Jonas Geiping, Alex Stein, Manli Shu, Khalid Saifullah, Yuxin Wen, Tom Goldstein
cs.CR updates on arXiv.org arxiv.org
Abstract: It has recently been shown that adversarial attacks on large language models (LLMs) can "jailbreak" the model into making harmful statements. In this work, we argue that the spectrum of adversarial attacks on LLMs is much larger than merely jailbreaking. We provide a broad overview of possible attack surfaces and attack goals. Based on a series of concrete examples, we discuss, categorize and systematize attacks that coerce varied unintended behaviors, such as misdirection, model control, …
adversarial adversarial attacks arxiv attacks can cs.cl cs.cr cs.lg jailbreak jailbreaking language language models large llms making reveal spectrum work
More from arxiv.org / cs.CR updates on arXiv.org
IDEA: Invariant Defense for Graph Adversarial Robustness
2 days, 4 hours ago |
arxiv.org
FairCMS: Cloud Media Sharing with Fair Copyright Protection
2 days, 4 hours ago |
arxiv.org
Efficient unitary designs and pseudorandom unitaries from permutations
2 days, 4 hours ago |
arxiv.org
Jobs in InfoSec / Cybersecurity
SOC 2 Manager, Audit and Certification
@ Deloitte | US and CA Multiple Locations
Associate Principal Security Engineer
@ Activision Blizzard | Work from Home - CA
Security Engineer- Systems Integration
@ Meta | Bellevue, WA | Menlo Park, CA | New York City
Lead Security Engineer (Digital Forensic and IR Analyst)
@ Blue Yonder | Hyderabad
Senior Principal IAM Engineering Program Manager Cybersecurity
@ Providence | Redmond, WA, United States
Information Security Analyst II or III
@ Entergy | The Woodlands, Texas, United States