Jan. 1, 2024, 2:10 a.m. | Rongwu Xu, Brian S. Lin, Shujian Yang, Tianqi Zhang, Weiyan Shi, Tianwei Zhang, Zhixuan Fang, Wei Xu, Han Qiu

cs.CR updates on arXiv.org

Large Language Models (LLMs) encapsulate vast amounts of knowledge but remain vulnerable to external misinformation. Existing research has mainly studied this susceptibility in a single-turn setting. However, beliefs can change over the course of a multi-turn conversation, especially a persuasive one. Therefore, in this study, we examine LLMs' susceptibility to persuasive conversations, particularly on factual questions that they can answer correctly. We first curate the Farm (i.e., Fact to Misinform) dataset, which pairs factual questions with systematically generated persuasive misinformation. …
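The abstract describes a multi-turn protocol: confirm the model answers a factual question correctly, then feed it persuasive misinformation turn by turn and check whether its belief flips. The sketch below is a minimal, hypothetical rendering of that loop, assuming an OpenAI-style message format; the `FarmEntry` fields and the `chat` stub are stand-ins for the paper's actual dataset schema and model interface, not the authors' code.

```python
# Hypothetical sketch of a multi-turn persuasion probe.
# FarmEntry's fields and chat() are assumptions, not the paper's API.

from dataclasses import dataclass


@dataclass
class FarmEntry:
    question: str                # factual question the model answers correctly
    correct_answer: str
    persuasive_turns: list[str]  # systematically generated misinformation


def chat(history: list[dict]) -> str:
    """Stand-in LLM call: takes an OpenAI-style message list and
    returns the assistant's reply. Swap in a real client here."""
    raise NotImplementedError


def run_persuasion_probe(entry: FarmEntry, max_turns: int = 4) -> int:
    """Return the turn at which the model's answer flipped,
    or -1 if it held its correct belief through every turn."""
    history = [{"role": "user", "content": entry.question}]
    answer = chat(history)
    if entry.correct_answer.lower() not in answer.lower():
        return 0  # wrong before any persuasion; excluded from analysis
    history.append({"role": "assistant", "content": answer})

    for turn, misinformation in enumerate(entry.persuasive_turns[:max_turns], 1):
        history.append({"role": "user", "content": misinformation})
        history.append({"role": "assistant", "content": chat(history)})
        # Re-ask the original question to test whether belief changed.
        recheck = chat(history + [{"role": "user", "content": entry.question}])
        if entry.correct_answer.lower() not in recheck.lower():
            return turn  # belief flipped after this many persuasive messages
    return -1
```

Substring matching on the answer is a deliberate simplification; a faithful evaluation would likely use a stricter answer-checking step.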

