Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks
April 3, 2024, 4:11 a.m. | Maksym Andriushchenko, Francesco Croce, Nicolas Flammarion
cs.CR updates on arXiv.org
Abstract: We show that even the most recent safety-aligned LLMs are not robust to simple adaptive jailbreaking attacks. First, we demonstrate how to successfully leverage access to logprobs for jailbreaking: we initially design an adversarial prompt template (sometimes adapted to the target LLM), and then we apply random search on a suffix to maximize the target logprob (e.g., of the token "Sure"), potentially with multiple restarts. In this way, we achieve nearly 100% attack success rate …
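The core loop the abstract describes (random search over a suffix to maximize the logprob of a target token such as "Sure", with multiple restarts) is simple enough to sketch. The following is a minimal Python illustration, not the authors' code: `target_logprob` is a hypothetical stand-in for an API call returning the log-probability the target LLM assigns to "Sure" as its first response token, and the character-level mutations, suffix length, and iteration counts are placeholder choices rather than the paper's settings.

```python
import random
import string

# Hypothetical stand-in for a logprob query against the target LLM. The
# attack assumes API access to logprobs; this dummy scorer (an assumption
# of the sketch) just makes the example runnable end to end.
def target_logprob(prompt: str) -> float:
    return -(hash(prompt) % 1000) / 100.0  # pretend logprob of "Sure"

def random_search_suffix(base_prompt: str,
                         suffix_len: int = 25,
                         iters: int = 1000,
                         restarts: int = 5) -> str:
    """Hill-climbing random search over an adversarial suffix, with restarts."""
    charset = string.ascii_letters + string.digits + string.punctuation + " "
    best_prompt, best_score = base_prompt, float("-inf")
    for _ in range(restarts):
        # Each restart begins from a fresh random suffix.
        suffix = [random.choice(charset) for _ in range(suffix_len)]
        score = target_logprob(base_prompt + "".join(suffix))
        for _ in range(iters):
            # Propose a single-position mutation of the suffix.
            candidate = suffix.copy()
            candidate[random.randrange(suffix_len)] = random.choice(charset)
            candidate_score = target_logprob(base_prompt + "".join(candidate))
            # Accept the mutation only if it raises the target token's logprob.
            if candidate_score > score:
                suffix, score = candidate, candidate_score
        if score > best_score:
            best_prompt, best_score = base_prompt + "".join(suffix), score
    return best_prompt

if __name__ == "__main__":
    adversarial = random_search_suffix("Hypothetical harmful request. ")
    print(adversarial)
```

Accepting a mutation only when the score improves makes each restart a greedy hill climb; the multiple restarts the abstract mentions guard against getting stuck in poor local optima.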