Localizing Paragraph Memorization in Language Models
April 1, 2024, 4:11 a.m. | Niklas Stoehr, Mitchell Gordon, Chiyuan Zhang, Owen Lewis
cs.CR updates on arXiv.org
Abstract: Can we localize the weights and mechanisms used by a language model to memorize and recite entire paragraphs of its training data? In this paper, we show that while memorization is spread across multiple layers and model components, gradients of memorized paragraphs have a distinguishable spatial pattern, being larger in lower model layers than gradients of non-memorized examples. Moreover, the memorized examples can be unlearned by fine-tuning only the high-gradient weights. We localize a low-layer …
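The abstract's "unlearned by fine-tuning only the high-gradient weights" idea can be sketched with a toy NumPy model. This is not the authors' code: the two-layer linear network, the squared-error loss, and the top-k threshold are all illustrative assumptions, standing in for ranking a language model's parameters by gradient magnitude and updating only the highest-ranked ones.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer linear model y = W2 @ (W1 @ x) with squared-error loss.
# (Hypothetical stand-in for a language model -- illustration only.)
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(2, 4))
x = rng.normal(size=3)
t = rng.normal(size=2)   # target output

h = W1 @ x
y = W2 @ h
err = y - t              # dL/dy for L = 0.5 * ||y - t||^2

# Backpropagated gradients for both weight matrices.
gW2 = np.outer(err, h)            # dL/dW2
gW1 = np.outer(W2.T @ err, x)     # dL/dW1

# "Localize": flatten all gradients, keep only the top-k by magnitude,
# and apply the update to those weights alone (sparse fine-tuning step).
grads = np.concatenate([gW1.ravel(), gW2.ravel()])
k = 5
top = np.argsort(np.abs(grads))[-k:]
mask = np.zeros_like(grads)
mask[top] = 1.0

lr = 0.1
update = -lr * grads * mask
dW1 = update[:gW1.size].reshape(gW1.shape)
dW2 = update[gW1.size:].reshape(gW2.shape)
W1 += dW1
W2 += dW2

# Exactly k parameters were changed; all others are untouched.
print(int((dW1 != 0).sum() + (dW2 != 0).sum()))  # -> 5
```

The mask confines the gradient step to the k highest-gradient parameters, which is the mechanism the paper exploits: if memorization concentrates in a small, identifiable set of weights, updating only that set can remove the memorized paragraph while leaving the rest of the model intact.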