all InfoSec news
Goal-guided Generative Prompt Injection Attack on Large Language Models
April 12, 2024, 4:10 a.m. | Chong Zhang, Mingyu Jin, Qinkai Yu, Chengzhi Liu, Haochen Xue, Xiaobo Jin
cs.CR updates on arXiv.org arxiv.org
Abstract: Current large language models (LLMs) provide a strong foundation for large-scale user-oriented natural language tasks. A large number of users can easily inject adversarial text or instructions through the user interface, thus causing LLMs model security challenges. Although there is currently a large amount of research on prompt injection attacks, most of these black-box attacks use heuristic strategies. It is unclear how these heuristic strategies relate to the success rate of attacks and thus effectively …
adversarial arxiv attack can challenges cs.ai cs.cl cs.cr current foundation generative goal inject injection injection attack instructions interface language language models large llms natural natural language prompt prompt injection research scale security security challenges text user interface
More from arxiv.org / cs.CR updates on arXiv.org
Jobs in InfoSec / Cybersecurity
Social Engineer For Reverse Engineering Exploit Study
@ Independent study | Remote
Information Security Engineer, Sr. (Container Hardening)
@ Rackner | San Antonio, TX
BaaN IV Techno-functional consultant-On-Balfour
@ Marlabs | Piscataway, US
Senior Security Analyst
@ BETSOL | Bengaluru, India
Security Operations Centre Operator
@ NEXTDC | West Footscray, Australia
Senior Network and Security Research Officer
@ University of Toronto | Toronto, ON, CA