Boosting Jailbreak Attack with Momentum
May 3, 2024, 4:15 a.m. | Yihao Zhang, Zeming Wei
cs.CR updates on arXiv.org arxiv.org
Abstract: Large Language Models (LLMs) have achieved remarkable success across diverse tasks, yet they remain vulnerable to adversarial attacks, notably the well-documented jailbreak attack. Recently, the Greedy Coordinate Gradient (GCG) attack has demonstrated efficacy in exploiting this vulnerability by optimizing adversarial prompts through a combination of gradient heuristics and greedy search. However, the efficiency of this attack has become a bottleneck in the attack process. To mitigate this limitation, in this paper we rethink the generation …
arxiv attack cs.ai cs.cl cs.cr cs.lg jailbreak math.oc momentum
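The abstract pairs two ideas: a per-token gradient heuristic (as in GCG) and a momentum term that smooths that gradient across iterations. As a rough illustration of how momentum plugs into a coordinate-gradient candidate search, here is a toy sketch; the function name, shapes, and the `mu` coefficient are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def momentum_gcg_step(grad, velocity, mu=0.9, top_k=3):
    """One momentum-augmented coordinate-gradient step (toy sketch).

    grad:     (positions, vocab) gradient of the loss w.r.t. one-hot
              token indicators at each prompt position
    velocity: running momentum buffer of the same shape
    mu:       momentum coefficient (assumed value, for illustration)

    Returns the updated velocity and, per position, the top_k candidate
    token ids with the most negative smoothed gradient, i.e. the
    substitutions expected to decrease the loss the most. A GCG-style
    greedy search would then evaluate these candidates exactly and keep
    the best one.
    """
    velocity = mu * velocity + grad                        # smooth gradients across steps
    candidates = np.argsort(velocity, axis=1)[:, :top_k]   # ascending: most negative first
    return velocity, candidates
```

The only change from a plain gradient heuristic is the `velocity` accumulation: candidates are ranked by the exponentially smoothed gradient rather than the current step's gradient alone, which is the momentum idea the title refers to.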