Jan. 31, 2024, 2:10 a.m. | Wenjie Qu, Dong Yin, Zixin He, Wei Zou, Tianyang Tao, Jinyuan Jia, Jiaheng Zhang

cs.CR updates on arXiv.org

Large Language Models (LLMs) have been widely deployed for their remarkable
capability to generate text resembling human language. However, they can be
misused by criminals to create deceptive content, such as fake news and
phishing emails, which raises ethical concerns. Watermarking is a key technique
for mitigating LLM misuse: it embeds a watermark (e.g., a bit string) into
text generated by an LLM. This enables the detection of
texts generated by an LLM as well as …
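To make the idea concrete, below is a minimal sketch of one common family of LLM watermarks, a hash-seeded "green list" scheme: each previous token pseudorandomly partitions the vocabulary, generation favors green tokens, and detection counts how often that bias appears. This is an illustrative simplification, not the method of the paper above; the vocabulary size, green fraction, and function names are all hypothetical.

```python
import hashlib
import random

VOCAB_SIZE = 1000      # hypothetical vocabulary size
GREEN_FRACTION = 0.5   # fraction of the vocabulary marked "green" per step

def green_list(prev_token: int) -> set:
    """Pseudorandomly partition the vocabulary, seeded by the previous token."""
    seed = int(hashlib.sha256(str(prev_token).encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    ids = list(range(VOCAB_SIZE))
    rng.shuffle(ids)
    return set(ids[: int(VOCAB_SIZE * GREEN_FRACTION)])

def generate_watermarked(length: int, seed: int = 0) -> list:
    """Toy 'LLM' that always samples from the green list of the previous token."""
    rng = random.Random(seed)
    tokens = [rng.randrange(VOCAB_SIZE)]
    for _ in range(length - 1):
        tokens.append(rng.choice(sorted(green_list(tokens[-1]))))
    return tokens

def detect(tokens: list) -> float:
    """z-score of the green-token count; a large z suggests a watermark."""
    n = len(tokens) - 1
    if n <= 0:
        return 0.0
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:])
               if tok in green_list(prev))
    expected = GREEN_FRACTION * n
    variance = n * GREEN_FRACTION * (1 - GREEN_FRACTION)
    return (hits - expected) / variance ** 0.5
```

On watermarked text every transition lands in the green list, so the z-score grows like the square root of the text length, while unwatermarked text stays near zero; real schemes apply a soft logit bias instead of hard sampling so text quality is preserved.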
