Web: http://arxiv.org/abs/2301.10226

Jan. 25, 2023, 2:10 a.m. | John Kirchenbauer, Jonas Geiping, Yuxin Wen, Jonathan Katz, Ian Miers, Tom Goldstein

cs.CR updates on arXiv.org

Potential harms of large language models can be mitigated by watermarking
model output, i.e., embedding signals into generated text that are invisible to
humans but algorithmically detectable from a short span of tokens. We propose a
watermarking framework for proprietary language models. The watermark can be
embedded with negligible impact on text quality, and can be detected using an
efficient open-source algorithm without access to the language model API or
parameters. The watermark works by selecting a randomized set of …
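To make the detection side concrete, here is a minimal sketch of how a token-level watermark detector of this general kind could work, assuming (as an illustration, not the paper's exact algorithm) that the generator seeds a PRNG with a hash of the preceding token, marks a fixed fraction of the vocabulary as "green", and that detection computes a z-score on the observed green-token rate. The names `green_set` and `detect` and all parameters are hypothetical.

```python
import hashlib
import math
import random

def green_set(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    # Hypothetical scheme: seed a PRNG with a hash of the previous token,
    # then pseudo-randomly mark a fixed fraction of the vocabulary "green".
    # The generator would softly prefer these tokens; the detector only
    # needs the same hash function and vocabulary, not the model itself.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    return set(rng.sample(vocab, int(len(vocab) * fraction)))

def detect(tokens: list[str], vocab: list[str], fraction: float = 0.5) -> float:
    # Count tokens that land in the green set chosen by their predecessor,
    # then compute a one-proportion z-score against the null hypothesis
    # that green tokens occur at the base rate `fraction` (unwatermarked text).
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:])
               if tok in green_set(prev, vocab, fraction))
    n = len(tokens) - 1
    return (hits - fraction * n) / math.sqrt(n * fraction * (1 - fraction))
```

A large positive z-score indicates far more green tokens than chance would produce, which is why detection can work from a short span of tokens without access to the model: the statistical signal accumulates with every token.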

