Mitigating Approximate Memorization in Language Models via Dissimilarity Learned Policy. (arXiv:2305.01550v1 [cs.CL])
cs.CR updates on arXiv.org arxiv.org
Large language models (LLMs) are trained on large amounts of data, which can
include sensitive information that may compromise personal privacy. LLMs have
been shown to memorize parts of their training data and to emit those data
verbatim when an adversary prompts them appropriately. Previous research has
primarily focused on data preprocessing and differential privacy techniques to
address memorization, or on preventing verbatim memorization exclusively, which
can give a false sense of privacy. However, these methods rely on explicit and
implicit assumptions about the …
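To make the notion of verbatim memorization concrete, here is a minimal sketch (not the paper's dissimilarity-policy method) of a naive check: given a model's output and a training corpus, it measures the longest contiguous run of output tokens that appears verbatim in the corpus. The function name and the whitespace tokenization are illustrative assumptions.

```python
# Naive verbatim-memorization check: longest contiguous token run from
# `output` that occurs verbatim (as a substring) in `corpus`.
# Illustrative sketch only; substring matching on whitespace tokens can
# match partial words and is far cruder than real extraction audits.

def longest_verbatim_overlap(output: str, corpus: str) -> int:
    """Length (in tokens) of the longest run of output tokens found
    contiguously in the corpus."""
    out_tokens = output.split()
    best = 0
    for i in range(len(out_tokens)):
        # Only try spans longer than the best found so far.
        for j in range(i + best + 1, len(out_tokens) + 1):
            if " ".join(out_tokens[i:j]) in corpus:
                best = j - i
            else:
                break
    return best

corpus = "the quick brown fox jumps over the lazy dog"
print(longest_verbatim_overlap("brown fox jumps high", corpus))  # → 3
```

A long overlap suggests the model is reproducing training text rather than generalizing; approximate-memorization defenses target near-copies that such an exact-match check would miss.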