all InfoSec news
Prompt Injection attack against LLM-integrated Applications
March 5, 2024, 3:12 p.m. | Yi Liu, Gelei Deng, Yuekang Li, Kailong Wang, Zihao Wang, Xiaofeng Wang, Tianwei Zhang, Yepang Liu, Haoyu Wang, Yan Zheng, Yang Liu
cs.CR updates on arXiv.org arxiv.org
Abstract: Large Language Models (LLMs), renowned for their superior proficiency in language comprehension and generation, stimulate a vibrant ecosystem of applications around them. However, their extensive assimilation into various services introduces significant security risks. This study deconstructs the complexities and implications of prompt injection attacks on actual LLM-integrated applications. Initially, we conduct an exploratory analysis on ten commercial applications, highlighting the constraints of current attack strategies in practice. Prompted by these limitations, we subsequently formulate HouYi, …
applications arxiv attack attacks complexities cs.ai cs.cl cs.cr cs.se ecosystem injection injection attack injection attacks language language models large llm llms prompt prompt injection prompt injection attacks risks security security risks services study
More from arxiv.org / cs.CR updates on arXiv.org
Jobs in InfoSec / Cybersecurity
Red Team Operator
@ JPMorgan Chase & Co. | LONDON, United Kingdom
SOC Analyst
@ Resillion | Bengaluru, India
Director of Cyber Security
@ Revinate | San Francisco Bay Area
Jr. Security Incident Response Analyst
@ Kaseya | Miami, Florida, United States
Infrastructure Vulnerability Consultant - (Cloud Security , CSPM)
@ Blue Yonder | Hyderabad
Product Security Lead
@ Lely | Maassluis, Netherlands