June 4, 2024, 4:11 a.m. | Ziqian Zeng, Jianwei Wang, Zhengdong Lu, Huiping Zhuang, Cen Chen

cs.CR updates on arXiv.org

arXiv:2406.01394v1 Announce Type: new
Abstract: The widespread use of online Large Language Model (LLM) inference services has raised significant privacy concerns about the potential exposure of private information in user inputs to eavesdroppers or untrustworthy service providers. Existing privacy protection methods for LLMs suffer from insufficient privacy protection, performance degradation, or severe inference-time overhead. In this paper, we propose PrivacyRestore to protect the privacy of user inputs during LLM inference. PrivacyRestore directly removes privacy spans in user inputs and …
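The abstract is truncated here, but the first step it describes, removing privacy spans from the user input before it is sent to the inference service, can be illustrated with a minimal sketch. The `redact_spans` helper, the span offsets, and the `[REDACTED]` placeholder below are illustrative assumptions, not the paper's actual interface or restoration mechanism.

```python
# Minimal sketch of span removal, assuming privacy spans are given as
# (start, end) character offsets. This is an illustration only; it is
# not PrivacyRestore's actual implementation.

def redact_spans(text: str, spans: list[tuple[int, int]],
                 placeholder: str = "[REDACTED]") -> str:
    """Replace each (start, end) span with a placeholder.

    Spans are processed right to left so that earlier offsets remain
    valid after each replacement changes the string length.
    """
    for start, end in sorted(spans, reverse=True):
        text = text[:start] + placeholder + text[end:]
    return text

# Example query with two hypothetical privacy spans: a name and an SSN.
query = "My name is Alice Chen and my SSN is 123-45-6789. Am I eligible?"
private_spans = [(11, 21), (36, 47)]
print(redact_spans(query, private_spans))
# -> My name is [REDACTED] and my SSN is [REDACTED]. Am I eligible?
```

Only the redacted text would leave the client, so an eavesdropper or an untrustworthy provider never observes the removed spans; how the model's output quality is then restored is the part of the method elided by the truncated abstract.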
