Feb. 27, 2024, 5:11 a.m. | Qi Pang, Shengyuan Hu, Wenting Zheng, Virginia Smith

cs.CR updates on arXiv.org

arXiv:2402.16187v1 Announce Type: new
Abstract: Advances in generative models have made it possible for AI-generated text, code, and images to mirror human-generated content in many applications. Watermarking, a technique that aims to embed information in the output of a model to verify its source, is useful for mitigating misuse of such AI-generated content. However, existing watermarking schemes remain surprisingly susceptible to attack. In particular, we show that desirable properties shared by existing LLM watermarking systems such as quality preservation, robustness, …

