June 12, 2023, 1:10 a.m. | Yi Liu, Gelei Deng, Yuekang Li, Kailong Wang, Tianwei Zhang, Yepang Liu, Haoyu Wang, Yan Zheng, Yang Liu

cs.CR updates on arXiv.org arxiv.org

Large Language Models (LLMs), renowned for their superior proficiency in
language comprehension and generation, stimulate a vibrant ecosystem of
applications around them. However, their extensive assimilation into various
services introduces significant security risks. This study deconstructs the
complexities and implications of prompt injection attacks on actual
LLM-integrated applications. Initially, we conduct an exploratory analysis on
ten commercial applications, highlighting the constraints of current attack
strategies in practice. Prompted by these limitations, we subsequently
formulate HouYi, a novel black-box prompt injection …

applications attack attacks ecosystem injection injection attack injection attacks language language models large llm llms prompt injection prompt injection attacks risks security security risks services study

Expert Global Security Solutions Specialist

@ CHS Inc. | Inver Grove Heights, MN, US, 55077-1721

Security Operations Senior Associate - Perimeter Response

@ JPMorgan Chase & Co. | Houston, TX, United States

Cybersecurity Engineer IV

@ ManTech | 203O - CustomerSite,Washington,DC

Senior Site Reliability Engineer - Security

@ Klaviyo | Boston, MA

Information Security Specialist (Cloud Security)

@ Vertiv | Philippines

Business Value Consultant

@ Sumo Logic | United States