Can LLM-Generated Misinformation Be Detected?
March 19, 2024, 4:11 a.m. | Canyu Chen, Kai Shu
cs.CR updates on arXiv.org
Abstract: The advent of Large Language Models (LLMs) has had a transformative impact. However, the potential for LLMs such as ChatGPT to be exploited to generate misinformation poses a serious threat to online safety and public trust. A fundamental research question is: will LLM-generated misinformation cause more harm than human-written misinformation? We propose to tackle this question from the perspective of detection difficulty. We first build a taxonomy of LLM-generated misinformation. Then we categorize and …