March 19, 2024, 4:11 a.m. | Canyu Chen, Kai Shu

cs.CR updates on arXiv.org

arXiv:2309.13788v3 Announce Type: replace-cross
Abstract: The advent of Large Language Models (LLMs) has made a transformative impact. However, the potential for LLMs such as ChatGPT to be exploited to generate misinformation poses a serious concern for online safety and public trust. A fundamental research question is: will LLM-generated misinformation cause more harm than human-written misinformation? We propose to tackle this question from the perspective of detection difficulty. We first build a taxonomy of LLM-generated misinformation. Then we categorize and …

