all InfoSec news
Prompt Injection attack against LLM-integrated Applications. (arXiv:2306.05499v1 [cs.CR])
cs.CR updates on arXiv.org arxiv.org
Large Language Models (LLMs), renowned for their superior proficiency in
language comprehension and generation, stimulate a vibrant ecosystem of
applications around them. However, their extensive assimilation into various
services introduces significant security risks. This study deconstructs the
complexities and implications of prompt injection attacks on actual
LLM-integrated applications. Initially, we conduct an exploratory analysis on
ten commercial applications, highlighting the constraints of current attack
strategies in practice. Prompted by these limitations, we subsequently
formulate HouYi, a novel black-box prompt injection …
applications attack attacks ecosystem injection injection attack injection attacks language language models large llm llms prompt injection prompt injection attacks risks security security risks services study