March 4, 2024, 12:01 p.m. | Bruce Schneier

Schneier on Security www.schneier.com

Researchers have demonstrated a worm that spreads through prompt injection. Details:


In one instance, the researchers, acting as attackers, wrote an email including the adversarial text prompt, which “poisons” the database of an email assistant using retrieval-augmented generation (RAG), a way for LLMs to pull in extra data from outside its system. When the email is retrieved by the RAG, in response to a user query, and is sent to GPT-4 or Gemini Pro to create an answer, …

academic papers adversarial artificial intelligence assistant attackers data database email injection instance llm llms malware prompt prompt injection rag researchers system text worm

SOC 2 Manager, Audit and Certification

@ Deloitte | US and CA Multiple Locations

Network Security Engineer

@ Meta | Menlo Park, CA | Remote, US

Security Engineer, Investigations - i3

@ Meta | Washington, DC

Threat Investigator- Security Analyst

@ Meta | Menlo Park, CA | Seattle, WA | Washington, DC

Security Operations Engineer II

@ Microsoft | Redmond, Washington, United States

Engineering -- Tech Risk -- Global Cyber Defense & Intelligence -- Bug Bounty -- Associate -- Dallas

@ Goldman Sachs | Dallas, Texas, United States