What's in Your "Safe" Data?: Identifying Benign Data that Breaks Safety
April 2, 2024, 7:12 p.m. | Luxi He, Mengzhou Xia, Peter Henderson
cs.CR updates on arXiv.org
Abstract: Current Large Language Models (LLMs), even those tuned for safety and alignment, are susceptible to jailbreaking. Some have found that just further fine-tuning an aligned model with benign data (i.e., data without harmful content) surprisingly leads to substantial degradation in safety. We delve into the data-centric aspects of why benign fine-tuning inadvertently contributes to jailbreaking. First, we represent fine-tuning data through two lenses: representation and gradient spaces. Furthermore, we propose a bi-directional anchoring method that …
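The abstract's core idea, scoring benign fine-tuning examples by where they sit relative to harmful and safety-preserving data in a shared feature space (e.g., per-example gradients or hidden representations), can be sketched with a simple heuristic. The sketch below is illustrative only: the anchor sets, feature construction, and scoring rule are assumptions for demonstration, not the paper's actual bi-directional anchoring algorithm.

```python
import numpy as np

def cosine_sim(a, b):
    """Row-wise cosine similarity matrix between two feature arrays."""
    a_n = a / np.linalg.norm(a, axis=1, keepdims=True)
    b_n = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a_n @ b_n.T

def rank_benign_examples(benign_feats, harmful_anchors, safe_anchors):
    """Score each benign example by its mean similarity to a set of
    harmful anchors minus its mean similarity to a set of safety anchors.
    Higher scores flag examples more likely to erode safety if used for
    fine-tuning. Hypothetical heuristic in the spirit of the abstract."""
    to_harmful = cosine_sim(benign_feats, harmful_anchors).mean(axis=1)
    to_safe = cosine_sim(benign_feats, safe_anchors).mean(axis=1)
    scores = to_harmful - to_safe
    order = np.argsort(scores)[::-1]  # most suspicious examples first
    return order, scores

# Toy data: 4 benign examples and 2 anchors per set, 3-dim features.
rng = np.random.default_rng(0)
benign = rng.normal(size=(4, 3))
harmful = rng.normal(size=(2, 3))
safe = rng.normal(size=(2, 3))
order, scores = rank_benign_examples(benign, harmful, safe)
```

In practice the features would come from the model itself (e.g., gradients of the loss on each example), and the top-ranked benign examples could be filtered out before fine-tuning.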