all InfoSec news
Human-Imperceptible Retrieval Poisoning Attacks in LLM-Powered Applications
April 29, 2024, 4:11 a.m. | Quan Zhang, Binqi Zeng, Chijin Zhou, Gwihwan Go, Heyuan Shi, Yu Jiang
cs.CR updates on arXiv.org arxiv.org
Abstract: Presently, with the assistance of advanced LLM application development frameworks, more and more LLM-powered applications can effortlessly augment the LLMs' knowledge with external content using the retrieval augmented generation (RAG) technique. However, these frameworks' designs do not have sufficient consideration of the risk of external content, thereby allowing attackers to undermine the applications developed with these frameworks. In this paper, we reveal a new threat to LLM-powered applications, termed retrieval poisoning, where attackers can guide …
advanced application application development applications arxiv assistance attacks can cs.ai cs.cr development external frameworks human knowledge llm llms poisoning poisoning attacks rag risk
More from arxiv.org / cs.CR updates on arXiv.org
Jobs in InfoSec / Cybersecurity
Information Security Engineers
@ D. E. Shaw Research | New York City
Technology Security Analyst
@ Halton Region | Oakville, Ontario, Canada
Senior Cyber Security Analyst
@ Valley Water | San Jose, CA
Security Operations Manager-West Coast
@ The Walt Disney Company | USA - CA - 2500 Broadway Street
Vulnerability Analyst - Remote (WFH)
@ Cognitive Medical Systems | Phoenix, AZ, US | Oak Ridge, TN, US | Austin, TX, US | Oregon, US | Austin, TX, US
Senior Mainframe Security Administrator
@ Danske Bank | Copenhagen V, Denmark