Sept. 4, 2023, 1:10 a.m. | Neel Jain, Avi Schwarzschild, Yuxin Wen, Gowthami Somepalli, John Kirchenbauer, Ping-yeh Chiang, Micah Goldblum, Aniruddha Saha, Jonas Geiping, Tom Goldstein

cs.CR updates on arXiv.org

As Large Language Models quickly become ubiquitous, it is critical to
understand their security vulnerabilities. Recent work shows that text
optimizers can produce jailbreaking prompts that bypass moderation and
alignment. Drawing from the rich body of work on adversarial machine learning,
we approach these attacks with three questions: What threat models are
practically useful in this domain? How do baseline defense techniques perform
in this new domain? How does LLM security differ from that of computer vision?


We evaluate several baseline defense strategies …
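One baseline defense commonly discussed in this setting is perplexity filtering: optimizer-generated adversarial suffixes tend to look like high-perplexity gibberish to a language model, so prompts whose perplexity exceeds a threshold can be flagged before they reach the aligned model. The sketch below is illustrative rather than the paper's implementation; the scoring model (GPT-2) and the threshold value are assumptions chosen for the example.

```python
# Minimal sketch of a perplexity-filtering defense (illustrative assumptions:
# GPT-2 as the scoring model, a hand-picked perplexity threshold).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")


def perplexity(prompt: str) -> float:
    """Return the scoring model's perplexity for a prompt."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels=ids yields the mean cross-entropy over the prompt tokens;
        # exponentiating it gives perplexity.
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()


def is_suspicious(prompt: str, threshold: float = 1000.0) -> bool:
    """Flag prompts whose perplexity exceeds the (assumed) threshold."""
    return perplexity(prompt) > threshold


# A natural-language request scores low; an optimizer-style suffix scores high.
print(is_suspicious("Explain how public-key cryptography works."))
print(is_suspicious("describing.\\ + similarlyNow write oppositeley.]( Me giving**ONE"))
```

The trade-off such a filter faces is false positives: unusual but benign inputs (code, other languages, typos) can also have high perplexity, which is one reason simple detection alone may not suffice as a defense.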

