April 2, 2024, 7:12 p.m. | Jie Ren, Han Xu, Yiding Liu, Yingqian Cui, Shuaiqiang Wang, Dawei Yin, Jiliang Tang

cs.CR updates on arXiv.org

arXiv:2311.08721v2 Announce Type: replace
Abstract: Large language models (LLMs) have shown great ability in various natural language tasks. However, there are concerns that LLMs could be used improperly or even illegally. To prevent the malicious usage of LLMs, detecting LLM-generated text becomes crucial in the deployment of LLM applications. Watermarking is an effective strategy for detecting LLM-generated content: it encodes a pre-defined secret watermark into the generated text to facilitate the detection process. However, the majority of existing watermark methods leverage …
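To make the watermarking idea concrete, here is a minimal sketch of a generic "green-list" style detector, assuming a hash-based vocabulary partition keyed on the previous token and a shared secret key. This illustrates the general technique only, not the specific method proposed in the paper; the key, green fraction, and tokenization are hypothetical.

```python
import hashlib
import math

SECRET_KEY = "watermark-key"   # hypothetical shared secret
GREEN_FRACTION = 0.5           # fraction of the vocabulary marked "green"

def is_green(prev_token: str, token: str) -> bool:
    """Pseudo-randomly assign `token` to the green list, seeded by the
    previous token and the secret key."""
    digest = hashlib.sha256(f"{SECRET_KEY}|{prev_token}|{token}".encode()).digest()
    return digest[0] / 255.0 < GREEN_FRACTION

def detect(tokens: list[str]) -> float:
    """Return a z-score for how many tokens fall in the green list
    beyond what GREEN_FRACTION would predict by chance."""
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    expected = GREEN_FRACTION * n
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (hits - expected) / std if std > 0 else 0.0

# Usage: a large positive z-score suggests the text carries the watermark.
z = detect("the quick brown fox jumps over the lazy dog".split())
print(f"z-score: {z:.2f}")
```

In generation, a watermarking sampler would bias the model toward green-list tokens; the detector above then tests whether an observed text contains significantly more green tokens than unwatermarked text would.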

