June 8, 2023, 1:10 a.m. | John Kirchenbauer, Jonas Geiping, Yuxin Wen, Manli Shu, Khalid Saifullah, Kezhi Kong, Kasun Fernando, Aniruddha Saha, Micah Goldblum, Tom Goldstein

cs.CR updates on arXiv.org

Large language models (LLMs) are now deployed for everyday use and positioned
to produce large quantities of text in the coming decade. Machine-generated
text may displace human-written text on the internet and has the potential to
be used for malicious purposes, such as spearphishing attacks and social media
bots. Watermarking is a simple and effective strategy for mitigating such harms
by enabling the detection and documentation of LLM-generated text. Yet a
crucial question remains: How reliable is watermarking in realistic …
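
For context, the kind of watermark studied here is the "green list" scheme that several of these authors introduced in earlier work: during generation, a pseudorandom subset of the vocabulary (seeded by the preceding token) is softly favored, and a detector later checks whether green tokens appear more often than chance. Below is a minimal sketch of the detection side only. It assumes a toy hash-based membership rule rather than the paper's actual RNG-seeded vocabulary partition, and the names `GAMMA`, `is_green`, and `detect_z_score` are illustrative, not from the paper's code.

```python
import hashlib
import math

# Illustrative parameter (an assumption, not the paper's setting):
GAMMA = 0.5  # fraction of the vocabulary assigned to the "green list"

def is_green(prev_token_id: int, token_id: int) -> bool:
    """Pseudorandom green-list membership, seeded by the preceding token.

    The generator softly biases sampling toward green tokens; the detector
    only needs this seeding rule, not the model or its weights.
    """
    digest = hashlib.sha256(f"{prev_token_id}:{token_id}".encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64 < GAMMA

def detect_z_score(token_ids: list[int]) -> float:
    """z-score against the null hypothesis that the text is unwatermarked.

    Under the null, each token is green with probability GAMMA, so the green
    count is binomial; a large positive z-score flags watermarked text.
    """
    n = len(token_ids) - 1  # number of scored (previous, current) pairs
    greens = sum(is_green(p, c) for p, c in zip(token_ids, token_ids[1:]))
    return (greens - GAMMA * n) / math.sqrt(n * GAMMA * (1 - GAMMA))
```

The reliability question the abstract raises amounts to asking how this z-score degrades in realistic settings, for example when watermarked text is paraphrased, hand-edited, or embedded inside a longer human-written document.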
