Jan. 25, 2023, 2:10 a.m. | John Kirchenbauer, Jonas Geiping, Yuxin Wen, Jonathan Katz, Ian Miers, Tom Goldstein

cs.CR updates on arXiv.org

Potential harms of large language models can be mitigated by watermarking
model output, i.e., embedding signals into generated text that are invisible to
humans but algorithmically detectable from a short span of tokens. We propose a
watermarking framework for proprietary language models. The watermark can be
embedded with negligible impact on text quality, and can be detected using an
efficient open-source algorithm without access to the language model API or
parameters. The watermark works by selecting a randomized set of …
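
The excerpt above is truncated, but it describes a detection procedure that needs only the generated tokens, not the model. A minimal sketch of such a detector is shown below; the hash-based seeding from the preceding token, the green-list fraction `GAMMA`, the vocabulary size, and the one-proportion z-test are assumptions for illustration, not details taken verbatim from the excerpt.

```python
import hashlib
import math
import random

# Illustrative sketch of model-free watermark detection. Assumptions:
# a pseudo-random "green" subset of the vocabulary is re-derived from a
# hash of the preceding token, GAMMA is the assumed green fraction, and
# the test statistic is a one-proportion z-score.

VOCAB_SIZE = 50_000   # assumed tokenizer vocabulary size
GAMMA = 0.25          # assumed fraction of the vocabulary marked "green"


def green_list(prev_token_id: int) -> set[int]:
    """Recompute the pseudo-random 'green' token set seeded by the previous token."""
    seed = int.from_bytes(
        hashlib.sha256(prev_token_id.to_bytes(4, "big")).digest()[:8], "big"
    )
    rng = random.Random(seed)
    ids = list(range(VOCAB_SIZE))
    rng.shuffle(ids)
    return set(ids[: int(GAMMA * VOCAB_SIZE)])


def detect(token_ids: list[int]) -> float:
    """Return a z-score; large positive values suggest the watermark is present."""
    hits, trials = 0, 0
    for prev, cur in zip(token_ids, token_ids[1:]):
        trials += 1
        if cur in green_list(prev):
            hits += 1
    if trials == 0:
        return 0.0
    # Under the null hypothesis (unwatermarked text), roughly a GAMMA
    # fraction of tokens land in the green list by chance.
    return (hits - GAMMA * trials) / math.sqrt(GAMMA * (1 - GAMMA) * trials)
```

Because the green lists are recomputed from the token sequence alone, a score well above zero (e.g., several standard deviations) would flag the text as likely watermarked, which is consistent with the abstract's claim that detection requires no access to the language model API or parameters.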
