April 26, 2024, 4:11 a.m. | Divyansh Agarwal, Alexander R. Fabbri, Philippe Laban, Shafiq Joty, Caiming Xiong, Chien-Sheng Wu

cs.CR updates on arXiv.org arxiv.org

arXiv:2404.16251v1 Announce Type: new
Abstract: Prompt leakage in large language models (LLMs) poses a significant security and privacy threat, particularly in retrieval-augmented generation (RAG) systems. However, leakage in multi-turn LLM interactions along with mitigation strategies has not been studied in a standardized manner. This paper investigates LLM vulnerabilities against prompt leakage across 4 diverse domains and 10 closed- and open-source LLMs. Our unique multi-turn threat model leverages the LLM's sycophancy effect and our analysis dissects task instruction and knowledge leakage …

arxiv box cs.ai cs.cl cs.cr defenses effect language language models large llm llms mitigation mitigation strategies privacy prompt rag security strategies systems threat turn vulnerabilities

Information Security Engineers

@ D. E. Shaw Research | New York City

Technology Security Analyst

@ Halton Region | Oakville, Ontario, Canada

Senior Cyber Security Analyst

@ Valley Water | San Jose, CA

Sr. Staff Firmware Engineer – Networking & Firewall

@ Axiado | Bengaluru, India

Compliance Architect / Product Security Sr. Engineer/Expert (f/m/d)

@ SAP | Walldorf, DE, 69190

SAP Security Administrator

@ FARO Technologies | EMEA-Portugal