all InfoSec news
LLM Prompt Injection Worm
Schneier on Security www.schneier.com
Researchers have demonstrated a worm that spreads through prompt injection. Details:
In one instance, the researchers, acting as attackers, wrote an email including the adversarial text prompt, which “poisons” the database of an email assistant using retrieval-augmented generation (RAG), a way for LLMs to pull in extra data from outside its system. When the email is retrieved by the RAG, in response to a user query, and is sent to GPT-4 or Gemini Pro to create an answer, …
academic papers adversarial artificial intelligence assistant attackers data database email injection instance llm llms malware prompt prompt injection rag researchers system text worm