all InfoSec news
Defending Against Indirect Prompt Injection Attacks With Spotlighting
March 25, 2024, 4:11 a.m. | Keegan Hines, Gary Lopez, Matthew Hall, Federico Zarfati, Yonatan Zunger, Emre Kiciman
cs.CR updates on arXiv.org arxiv.org
Abstract: Large Language Models (LLMs), while powerful, are built and trained to process a single text input. In common applications, multiple inputs can be processed by concatenating them together into a single stream of text. However, the LLM is unable to distinguish which sections of prompt belong to various input sources. Indirect prompt injection attacks take advantage of this vulnerability by embedding adversarial instructions into untrusted data being processed alongside user commands. Often, the LLM will …
applications arxiv attacks can cs.cl cs.cr cs.lg defending injection injection attacks input inputs language language models large llm llms process processed prompt prompt injection prompt injection attacks single stream text
More from arxiv.org / cs.CR updates on arXiv.org
IDEA: Invariant Defense for Graph Adversarial Robustness
2 days, 9 hours ago |
arxiv.org
FairCMS: Cloud Media Sharing with Fair Copyright Protection
2 days, 9 hours ago |
arxiv.org
Efficient unitary designs and pseudorandom unitaries from permutations
2 days, 9 hours ago |
arxiv.org
Jobs in InfoSec / Cybersecurity
SOC 2 Manager, Audit and Certification
@ Deloitte | US and CA Multiple Locations
Security Officer Hospital Laguna Beach
@ Allied Universal | Laguna Beach, CA, United States
Sr. Cloud DevSecOps Engineer
@ Oracle | NOIDA, UTTAR PRADESH, India
Cloud Operations Security Engineer
@ Elekta | Crawley - Cornerstone
Cybersecurity – Senior Information System Security Manager (ISSM)
@ Boeing | USA - Seal Beach, CA
Engineering -- Tech Risk -- Security Architecture -- VP -- Dallas
@ Goldman Sachs | Dallas, Texas, United States