On the Learnability of Watermarks for Language Models
May 3, 2024, 4:16 a.m. | Chenchen Gu, Xiang Lisa Li, Percy Liang, Tatsunori Hashimoto
cs.CR updates on arXiv.org (arxiv.org)
Abstract: Watermarking of language model outputs enables statistical detection of model-generated text, which can mitigate harms and misuses of language models. Existing watermarking strategies operate by altering the decoder of an existing language model. In this paper, we ask whether language models can directly learn to generate watermarked text, which would have significant implications for the real-world deployment of watermarks. First, learned watermarks could be used to build open models that naturally generate watermarked text, enabling …
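The abstract's phrase "altering the decoder of an existing language model" refers to decoding-time schemes such as green-list logit biasing (Kirchenbauer et al., 2023). The Python sketch below is an illustrative approximation of that family of watermarks, not this paper's method; the function names, the previous-token seeding, and the parameters gamma (green-list fraction) and delta (logit bonus) are all assumptions made for the example.

import math
import torch

def watermarked_sample(logits: torch.Tensor, prev_token: int,
                       gamma: float = 0.5, delta: float = 2.0) -> int:
    # Illustrative decoder alteration: partition the vocabulary into
    # a "green list" seeded by the previous token, then add a logit
    # bonus to green tokens before sampling.
    vocab_size = logits.shape[-1]
    gen = torch.Generator().manual_seed(prev_token % (2**31))
    perm = torch.randperm(vocab_size, generator=gen)
    green = perm[: int(gamma * vocab_size)]
    biased = logits.clone()
    biased[green] += delta  # nudge sampling toward green tokens
    probs = torch.softmax(biased, dim=-1)
    return int(torch.multinomial(probs, num_samples=1))

def detection_z_score(tokens: list[int], vocab_size: int,
                      gamma: float = 0.5) -> float:
    # Statistical detection: count how many tokens fall in their
    # predecessor's green list. Unwatermarked text hits at rate
    # gamma, so a large z-score flags watermarked text.
    hits = 0
    for prev, tok in zip(tokens, tokens[1:]):
        gen = torch.Generator().manual_seed(prev % (2**31))
        perm = torch.randperm(vocab_size, generator=gen)
        hits += tok in set(perm[: int(gamma * vocab_size)].tolist())
    n = len(tokens) - 1
    return (hits - gamma * n) / math.sqrt(n * gamma * (1 - gamma))

Because the bias lives entirely in the sampling step, a model's weights are unchanged; the paper's question is whether a model can instead be trained so that its ordinary, unmodified decoding already produces text that a detector like the one above would flag.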