March 21, 2024, 4:10 a.m. | Chaoyi Zhu, Jeroen Galjaard, Pin-Yu Chen, Lydia Y. Chen

cs.CR updates on arXiv.org

arXiv:2403.13000v1 Announce Type: cross
Abstract: As large language models (LLMs) are increasingly used for text generation tasks, it is critical to audit their usage, govern their applications, and mitigate their potential harms. Existing watermark techniques have been shown to be effective in embedding a single human-imperceptible and machine-detectable pattern without significantly affecting generated text quality and semantics. However, the efficiency of watermark detection, i.e., the minimum number of tokens required to assert detection with significance and robustness against post-editing, is still debatable. In this …
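The detection-efficiency question raised in the abstract is easiest to see in a standard green-list watermarking scheme in the style of Kirchenbauer et al. (a common baseline, not necessarily the method this paper proposes): detection reduces to a one-proportion z-test, so significance grows with the number of tokens inspected. Below is a minimal sketch under that assumption; the names `GAMMA`, `is_green`, and `detection_z_score` and the hashing scheme are simplified illustrations, not an actual library API.

```python
import hashlib
import math

# Minimal sketch of green-list watermark detection (Kirchenbauer-style
# baseline); the hashing scheme is a toy stand-in, not this paper's method.
GAMMA = 0.5  # fraction of the vocabulary placed on the "green" list per step


def is_green(prev_token: int, token: int) -> bool:
    """Pseudo-randomly decide whether `token` is on the green list,
    seeded by `prev_token` (a simplified stand-in for a keyed hash)."""
    h = hashlib.sha256(f"{prev_token}:{token}".encode()).digest()
    u = int.from_bytes(h[:8], "big") / 2**64  # uniform-ish value in [0, 1)
    return u < GAMMA


def detection_z_score(tokens: list[int]) -> float:
    """One-proportion z-test: unwatermarked text lands on the green list
    with probability GAMMA per token, while watermarked text skews green;
    the z-score (hence detection confidence) grows with token count."""
    n = len(tokens) - 1  # number of (prev, current) pairs scored
    greens = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    return (greens - GAMMA * n) / math.sqrt(GAMMA * (1 - GAMMA) * n)
```

For instance, if watermarked text hits the green list about 75% of the time, the z-score grows roughly as 0.5·sqrt(n), so clearing z > 4 takes on the order of 64 tokens; post-editing that flips green tokens raises that minimum, which is the efficiency/robustness tension the abstract points to.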
