Pruning for Protection: Increasing Jailbreak Resistance in Aligned LLMs Without Fine-Tuning
April 30, 2024, 4:11 a.m. | Adib Hasan, Ileana Rugina, Alex Wang
cs.CR updates on arXiv.org arxiv.org
Abstract: Large Language Models (LLMs) are susceptible to "jailbreaking" prompts, which can induce the generation of harmful content. This paper demonstrates that moderate WANDA pruning (Sun et al., 2023) can increase their resistance to such attacks without the need for fine-tuning, while maintaining performance on standard benchmarks. Our findings suggest that the benefits of pruning correlate with the initial safety levels of the model, indicating a regularizing effect of WANDA pruning. We introduce a dataset of …
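For context, the WANDA criterion the paper builds on scores each weight by its magnitude times the norm of the input activations it sees, then drops the lowest-scoring weights per output row. The sketch below is a minimal NumPy illustration of that criterion under stated assumptions (dense matrices, uniform per-row sparsity); it is not the authors' implementation and omits details such as layer-wise calibration.

```python
import numpy as np

def wanda_prune(W, X, sparsity=0.5):
    """Prune W (out_features x in_features) by the WANDA score
    |W_ij| * ||X_j||_2, where X (n_samples x in_features) holds
    calibration activations. The lowest-scoring weights in each
    output row are zeroed until `sparsity` is reached."""
    # Per-input-feature activation norm over the calibration batch.
    act_norm = np.linalg.norm(X, axis=0)            # shape (in_features,)
    score = np.abs(W) * act_norm[None, :]           # shape (out, in)
    W_pruned = W.copy()
    k = int(W.shape[1] * sparsity)                  # weights to drop per row
    if k > 0:
        # Indices of the k lowest-scoring weights in each row.
        idx = np.argpartition(score, k - 1, axis=1)[:, :k]
        np.put_along_axis(W_pruned, idx, 0.0, axis=1)
    return W_pruned
```

At 50% sparsity this zeroes half the weights in every row while leaving the rest untouched, which is the "moderate pruning" regime the abstract reports as safety-preserving.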