Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory
July 2, 2024, 4:14 a.m. | Niloofar Mireshghallah, Hyunwoo Kim, Xuhui Zhou, Yulia Tsvetkov, Maarten Sap, Reza Shokri, Yejin Choi
cs.CR updates on arXiv.org arxiv.org
Abstract: The interactive use of large language models (LLMs) in AI assistants (at work, home, etc.) introduces a new set of inference-time privacy risks: LLMs are fed different types of information from multiple sources in their inputs and are expected to reason about what to share in their outputs, for what purpose and with whom, within a given context. In this work, we draw attention to the highly critical yet overlooked notion of contextual …
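The contextual-integrity framework the abstract invokes (due to Nissenbaum) judges an information flow by whether it conforms to the norms of its context, not merely by whether the data is sensitive. A minimal sketch of that kind of check, with an invented norm table and flow fields purely for illustration (this is not the paper's benchmark or method):

```python
# Toy contextual-integrity check: a flow is appropriate only if it
# matches a contextual norm. The norm table below is an illustrative
# assumption, not taken from the paper.
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    sender: str       # who discloses the information (e.g. an AI assistant)
    recipient: str    # who receives it
    info_type: str    # e.g. "health", "schedule"
    context: str      # social context, e.g. "work", "medical"

# Norms: which (info_type, context, recipient) combinations are appropriate.
ALLOWED = {
    ("health", "medical", "doctor"),
    ("schedule", "work", "coworker"),
}

def is_appropriate(flow: Flow) -> bool:
    """True iff the flow conforms to a contextual norm."""
    return (flow.info_type, flow.context, flow.recipient) in ALLOWED

# An assistant relaying health details to a coworker at work violates
# contextual integrity even though sharing them with a doctor would not.
leak = Flow("assistant", "coworker", "health", "work")
ok = Flow("assistant", "doctor", "health", "medical")
```

The point of the framing is visible here: the same `info_type` is acceptable in one (context, recipient) pair and a privacy violation in another, which is exactly the inference-time judgment the paper probes LLMs on.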