Baseline Defenses for Adversarial Attacks Against Aligned Language Models. (arXiv:2309.00614v1 [cs.LG])
cs.CR updates on arXiv.org
As Large Language Models quickly become ubiquitous, their security
vulnerabilities are critical to understand. Recent work shows that text
optimizers can produce jailbreaking prompts that bypass moderation and
alignment. Drawing from the rich body of work on adversarial machine learning,
we approach these attacks with three questions: What threat models are
practically useful in this domain? How do baseline defense techniques perform
in this new domain? How does LLM security differ from computer vision?
We evaluate several baseline defense strategies …
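One baseline commonly discussed in this line of work is perplexity filtering: optimizer-generated jailbreak suffixes tend to look like high-perplexity gibberish, so inputs whose likelihood under a reference language model is anomalously low can be flagged before they reach the aligned model. The following is a minimal sketch of that idea using a toy character-bigram model; the corpus, threshold, and function names are illustrative assumptions, not the paper's implementation.

```python
import math
from collections import Counter

def train_bigram_model(corpus: str):
    # Character-bigram counts with add-one smoothing over the observed alphabet.
    # A real deployment would use an actual LM's log-likelihood instead.
    bigrams = Counter(zip(corpus, corpus[1:]))
    unigrams = Counter(corpus)
    vocab_size = len(set(corpus))

    def logprob(a: str, b: str) -> float:
        return math.log((bigrams[(a, b)] + 1) / (unigrams[a] + vocab_size))

    return logprob

def perplexity(text: str, logprob) -> float:
    # Per-bigram perplexity: exp of the negative mean log-probability.
    pairs = list(zip(text, text[1:]))
    if not pairs:
        return float("inf")
    avg = sum(logprob(a, b) for a, b in pairs) / len(pairs)
    return math.exp(-avg)

def is_suspicious(prompt: str, logprob, threshold: float = 40.0) -> bool:
    # Flag prompts whose perplexity exceeds a calibrated threshold
    # (the threshold here is an arbitrary placeholder).
    return perplexity(prompt, logprob) > threshold
```

In practice the reference model would be a real LM (e.g. a small GPT-2) and the threshold would be calibrated on benign traffic, but the principle is the same: reject inputs whose perplexity exceeds the calibrated cutoff.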