Provably Robust Multi-bit Watermarking for AI-generated Text via Error Correction Code
April 17, 2024, 4:11 a.m. | Wenjie Qu, Dong Yin, Zixin He, Wei Zou, Tianyang Tao, Jinyuan Jia, Jiaheng Zhang
cs.CR updates on arXiv.org arxiv.org
Abstract: Large Language Models (LLMs) have been widely deployed for their remarkable capability to generate text resembling human language. However, they can be misused by criminals to create deceptive content, such as fake news and phishing emails, which raises ethical concerns. Watermarking is a key technique for mitigating the misuse of LLMs: it embeds a watermark (e.g., a bit string) into text generated by an LLM. This enables the detection of texts generated by …
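The title's key idea is that a multi-bit watermark payload can survive text edits if it is first encoded with an error-correcting code. The paper's actual construction is more sophisticated, but the principle can be sketched with a toy repetition code (a hypothetical stand-in ECC, not the authors' scheme): each payload bit is embedded redundantly, edits flip some of the embedded bits, and majority-vote decoding still recovers the original message.

```python
# Hedged sketch: robustness of a multi-bit watermark via a toy
# error-correcting code (5x repetition). This illustrates the general
# ECC principle only; the paper's real embedding and code differ.

REPEAT = 5  # repetition factor of the toy ECC

def ecc_encode(message_bits):
    """Repeat each payload bit REPEAT times (toy error-correcting code)."""
    return [b for b in message_bits for _ in range(REPEAT)]

def ecc_decode(codeword_bits):
    """Majority-vote each block of REPEAT bits back to one payload bit."""
    out = []
    for i in range(0, len(codeword_bits), REPEAT):
        block = codeword_bits[i:i + REPEAT]
        out.append(1 if sum(block) > len(block) // 2 else 0)
    return out

message = [1, 0, 1, 1, 0, 0, 1, 0]   # the multi-bit watermark payload
codeword = ecc_encode(message)       # what actually gets embedded in the text

# Model an attacker's paraphrasing/edits as a few scattered bit flips
# in the embedded channel (positions chosen for illustration).
received = list(codeword)
for pos in (2, 11, 23, 37):
    received[pos] ^= 1

recovered = ecc_decode(received)
print(recovered == message)  # → True: each block has at most one flip
```

Because each flipped position falls in a different 5-bit block, the majority vote in every block is unchanged and the full payload is recovered; without the ECC, those same four edits would corrupt four payload bits outright.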