Provably Robust Multi-bit Watermarking for AI-generated Text via Error Correction Code. (arXiv:2401.16820v1 [cs.CR])
cs.CR updates on arXiv.org
Large Language Models (LLMs) have been widely deployed for their remarkable
capability to generate text resembling human language. However, they can be
misused by criminals to create deceptive content, such as fake news and
phishing emails, which raises ethical concerns. Watermarking is a key technique
for mitigating such misuse: it embeds a watermark (e.g., a bit string) into
text generated by an LLM, enabling the detection of LLM-generated
text as well as …
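The idea the abstract describes, embedding a bit string into generated text and later detecting it, can be illustrated with a toy single-bit "green list" scheme (in the style popularized by Kirchenbauer et al.). This is only a hedged sketch: the paper's actual contribution is a provably robust multi-bit construction built on error correction codes, which is not reproduced here, and every name in this snippet (`VOCAB`, `green_list`, `embed_bit`, `detect_bit`) is invented for illustration.

```python
# Toy single-bit watermark sketch; NOT the paper's ECC-based method.
import hashlib
import random

VOCAB = [f"tok{i}" for i in range(100)]  # toy vocabulary


def green_list(prev_token: str, bit: int) -> set:
    """Pseudo-randomly split the vocabulary by hashing the previous
    token; the payload bit selects which half counts as 'green'."""
    ranked = sorted(
        VOCAB,
        key=lambda t: hashlib.sha256((prev_token + t).encode()).hexdigest(),
    )
    half = len(ranked) // 2
    return set(ranked[:half]) if bit == 0 else set(ranked[half:])


def embed_bit(bit: int, length: int, seed: int = 0) -> list:
    """Toy 'generation': always pick the next token from the green
    list determined by the payload bit and the previous token."""
    rng = random.Random(seed)
    tokens = [rng.choice(VOCAB)]
    for _ in range(length - 1):
        tokens.append(rng.choice(sorted(green_list(tokens[-1], bit))))
    return tokens


def detect_bit(tokens: list) -> int:
    """Recover the bit by majority vote over adjacent token pairs."""
    votes = sum(
        1 if cur in green_list(prev, 1) else -1
        for prev, cur in zip(tokens, tokens[1:])
    )
    return 1 if votes > 0 else 0
```

In the paper's setting, a multi-bit payload would first be encoded with an error correction code so that detection tolerates edits to the text; the majority vote above is only the crudest form of that redundancy.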