all InfoSec news
InjecAgent: Benchmarking Indirect Prompt Injections in Tool-Integrated Large Language Model Agents
March 6, 2024, 5:11 a.m. | Qiusi Zhan, Zhixiang Liang, Zifan Ying, Daniel Kang
cs.CR updates on arXiv.org arxiv.org
Abstract: Recent work has embodied LLMs as agents, allowing them to access tools, perform actions, and interact with external content (e.g., emails or websites). However, external content introduces the risk of indirect prompt injection (IPI) attacks, where malicious instructions are embedded within the content processed by LLMs, aiming to manipulate these agents into executing detrimental actions against users. Given the potentially severe consequences of such attacks, establishing benchmarks to assess and mitigate these risks is imperative. …
access actions arxiv attacks benchmarking cs.cl cs.cr emails embedded external injection language large large language model llms malicious processed prompt prompt injection risk tool tools websites work
More from arxiv.org / cs.CR updates on arXiv.org
Jobs in InfoSec / Cybersecurity
CyberSOC Technical Lead
@ Integrity360 | Sandyford, Dublin, Ireland
Cyber Security Strategy Consultant
@ Capco | New York City
Cyber Security Senior Consultant
@ Capco | Chicago, IL
Sr. Product Manager
@ MixMode | Remote, US
Security Compliance Strategist
@ Grab | Petaling Jaya, Malaysia
Cloud Security Architect, Lead
@ Booz Allen Hamilton | USA, VA, McLean (1500 Tysons McLean Dr)