March 25, 2024, 4:11 a.m. | Keegan Hines, Gary Lopez, Matthew Hall, Federico Zarfati, Yonatan Zunger, Emre Kiciman

cs.CR updates on arXiv.org arxiv.org

arXiv:2403.14720v1 Announce Type: new
Abstract: Large Language Models (LLMs), while powerful, are built and trained to process a single text input. In common applications, multiple inputs can be processed by concatenating them together into a single stream of text. However, the LLM is unable to distinguish which sections of prompt belong to various input sources. Indirect prompt injection attacks take advantage of this vulnerability by embedding adversarial instructions into untrusted data being processed alongside user commands. Often, the LLM will …

applications arxiv attacks can cs.cl cs.cr cs.lg defending injection injection attacks input inputs language language models large llm llms process processed prompt prompt injection prompt injection attacks single stream text

SOC 2 Manager, Audit and Certification

@ Deloitte | US and CA Multiple Locations

Security Officer Hospital Laguna Beach

@ Allied Universal | Laguna Beach, CA, United States

Sr. Cloud DevSecOps Engineer

@ Oracle | NOIDA, UTTAR PRADESH, India

Cloud Operations Security Engineer

@ Elekta | Crawley - Cornerstone

Cybersecurity – Senior Information System Security Manager (ISSM)

@ Boeing | USA - Seal Beach, CA

Engineering -- Tech Risk -- Security Architecture -- VP -- Dallas

@ Goldman Sachs | Dallas, Texas, United States