April 26, 2024, 4:11 a.m. | Divyansh Agarwal, Alexander R. Fabbri, Philippe Laban, Shafiq Joty, Caiming Xiong, Chien-Sheng Wu

cs.CR updates on arXiv.org arxiv.org

arXiv:2404.16251v1 Announce Type: new
Abstract: Prompt leakage in large language models (LLMs) poses a significant security and privacy threat, particularly in retrieval-augmented generation (RAG) systems. However, leakage in multi-turn LLM interactions along with mitigation strategies has not been studied in a standardized manner. This paper investigates LLM vulnerabilities against prompt leakage across 4 diverse domains and 10 closed- and open-source LLMs. Our unique multi-turn threat model leverages the LLM's sycophancy effect and our analysis dissects task instruction and knowledge leakage …

arxiv box cs.ai cs.cl cs.cr defenses effect language language models large llm llms mitigation mitigation strategies privacy prompt rag security strategies systems threat turn vulnerabilities

PMO Cybersécurité H/F

@ Hifield | Sèvres, France

Third Party Risk Management - Consultant

@ KPMG India | Bengaluru, Karnataka, India

Consultant Cyber Sécurité H/F - Strasbourg

@ Hifield | Strasbourg, France

Information Security Compliance Analyst

@ KPMG Australia | Melbourne, Australia

GDS Consulting - Cyber Security | Data Protection Senior Consultant

@ EY | Taguig, PH, 1634

Senior QA Engineer - Cloud Security

@ Tenable | Israel