"It's a Fair Game", or Is It? Examining How Users Navigate Disclosure Risks and Benefits When Using LLM-Based Conversational Agents
April 3, 2024, 4:11 a.m. | Zhiping Zhang, Michelle Jia, Hao-Ping Lee, Bingsheng Yao, Sauvik Das, Ada Lerner, Dakuo Wang, Tianshi Li
cs.CR updates on arXiv.org
Abstract: The widespread use of Large Language Model (LLM)-based conversational agents (CAs), especially in high-stakes domains, raises many privacy concerns. Building ethical LLM-based CAs that respect user privacy requires an in-depth understanding of the privacy risks that concern users the most. However, existing research, primarily model-centered, does not provide insight into users' perspectives. To bridge this gap, we analyzed sensitive disclosures in real-world ChatGPT conversations and conducted semi-structured interviews with 19 LLM-based CA users. We found …