March 19, 2024, 4:11 a.m. | Canyu Chen, Kai Shu

cs.CR updates on arXiv.org (arxiv.org)

arXiv:2309.13788v3 Announce Type: replace-cross
Abstract: The advent of Large Language Models (LLMs) has made a transformative impact. However, the potential for LLMs such as ChatGPT to be exploited to generate misinformation poses a serious concern for online safety and public trust. A fundamental research question is: will LLM-generated misinformation cause more harm than human-written misinformation? We propose to tackle this question from the perspective of detection difficulty. We first build a taxonomy of LLM-generated misinformation. Then we categorize and …
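The detection-difficulty framing lends itself to a simple measurement: run one detector over both human-written and LLM-generated misinformation samples and compare how often each set is flagged. Below is a minimal sketch in Python, assuming a hypothetical detect(text) classifier and placeholder sample lists; it illustrates the comparison only, not the paper's actual experimental setup.

from typing import Callable, List

def detection_rate(detect: Callable[[str], bool], samples: List[str]) -> float:
    # Fraction of samples the detector flags as misinformation.
    if not samples:
        return 0.0
    return sum(detect(text) for text in samples) / len(samples)

def difficulty_gap(
    detect: Callable[[str], bool],
    human_samples: List[str],
    llm_samples: List[str],
) -> float:
    # A positive gap means the detector catches human-written
    # misinformation more often, i.e. the LLM-generated set is
    # harder to detect under this (hypothetical) detector.
    return detection_rate(detect, human_samples) - detection_rate(detect, llm_samples)

In this framing, a consistently positive gap across detectors would support the concern the abstract raises: that LLM-generated misinformation can cause more harm because it is harder to catch.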

