April 29, 2024, 4:11 a.m. | Anselm Paulus, Arman Zharmagambetov, Chuan Guo, Brandon Amos, Yuandong Tian

cs.CR updates on arXiv.org

arXiv:2404.16873v1 Announce Type: new
Abstract: While Large Language Models (LLMs) have recently achieved remarkable successes, they are vulnerable to certain jailbreaking attacks that lead to the generation of inappropriate or harmful content. Manual red-teaming requires finding adversarial prompts that cause such jailbreaking, e.g. by appending a suffix to a given instruction, which is inefficient and time-consuming. On the other hand, automatic adversarial prompt generation often leads to semantically meaningless attacks that are easily detected by perplexity-based filters, may require gradient …
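The perplexity-based filter mentioned in the abstract can be sketched in a few lines: score each incoming prompt with a small reference language model and reject prompts whose perplexity is anomalously high, since gibberish adversarial suffixes produced by token-level search tend to score far above natural text. The sketch below is illustrative only; the model choice (gpt2), the threshold, and the example suffix are assumptions, not details from the paper.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal sketch of a perplexity-based jailbreak filter.
# Reference model and threshold are illustrative assumptions.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return the reference model's perplexity on `text`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=input_ids, the model returns the mean
        # next-token cross-entropy loss over the sequence.
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

def passes_filter(prompt: str, threshold: float = 1000.0) -> bool:
    """Accept prompts whose perplexity stays below the threshold.

    Token-level adversarial suffixes are often high-perplexity
    gibberish, while human-readable adversarial prompts (the kind
    this paper targets) can slip under the same threshold.
    """
    return perplexity(prompt) < threshold

if __name__ == "__main__":
    natural = "Write a short poem about the ocean."
    # Hypothetical gibberish suffix in the style of token-level attacks.
    gibberish = natural + ' describing.\\ + similarlyNow}] pgace'
    print(passes_filter(natural), passes_filter(gibberish))
```

A filter like this is cheap to run in front of a deployed model, which is precisely why the paper emphasizes generating human-readable adversarial prompts that such a defense does not catch.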

