A Watermark for Large Language Models. (arXiv:2301.10226v1 [cs.LG])
cs.CR updates on arXiv.org
Potential harms of large language models can be mitigated by watermarking
model output, i.e., embedding signals into generated text that are invisible to
humans but algorithmically detectable from a short span of tokens. We propose a
watermarking framework for proprietary language models. The watermark can be
embedded with negligible impact on text quality, and can be detected using an
efficient open-source algorithm without access to the language model API or
parameters. The watermark works by selecting a randomized set of …
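The detection side the abstract alludes to can be sketched concretely. In the paper's scheme, each generation step pseudorandomly partitions the vocabulary into a "green" subset (seeded by the preceding token), and a detector simply counts how often generated tokens land in their green sets, using a one-proportion z-test. The sketch below is illustrative, not the authors' implementation: the hash-based partition, `VOCAB_SIZE`, and `GAMMA` are assumptions standing in for the paper's PRNG and hyperparameters.

```python
import hashlib
import math

VOCAB_SIZE = 50_000   # illustrative vocabulary size (assumption)
GAMMA = 0.5           # assumed fraction of vocabulary marked "green" per step

def is_green(prev_token: int, token: int) -> bool:
    # Pseudo-randomly decide whether `token` is in the green list seeded by
    # the previous token (a hash stands in for the paper's seeded PRNG).
    h = hashlib.sha256(f"{prev_token}:{token}".encode()).digest()
    return int.from_bytes(h[:8], "big") / 2**64 < GAMMA

def z_score(tokens: list[int]) -> float:
    # One-proportion z-test: in unwatermarked text each token should be
    # green with probability GAMMA, so a large z-score signals a watermark.
    hits = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    return (hits - GAMMA * n) / math.sqrt(n * GAMMA * (1 - GAMMA))
```

Note how detection needs only the token sequence and the shared seeding rule, never the model's weights or API, which is what makes the open-source detector in the abstract possible.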