all InfoSec news
Goal-guided Generative Prompt Injection Attack on Large Language Models
April 12, 2024, 4:10 a.m. | Chong Zhang, Mingyu Jin, Qinkai Yu, Chengzhi Liu, Haochen Xue, Xiaobo Jin
cs.CR updates on arXiv.org arxiv.org
Abstract: Current large language models (LLMs) provide a strong foundation for large-scale user-oriented natural language tasks. A large number of users can easily inject adversarial text or instructions through the user interface, thus causing LLMs model security challenges. Although there is currently a large amount of research on prompt injection attacks, most of these black-box attacks use heuristic strategies. It is unclear how these heuristic strategies relate to the success rate of attacks and thus effectively …
adversarial arxiv attack can challenges cs.ai cs.cl cs.cr current foundation generative goal inject injection injection attack instructions interface language language models large llms natural natural language prompt prompt injection research scale security security challenges text user interface
More from arxiv.org / cs.CR updates on arXiv.org
Jobs in InfoSec / Cybersecurity
Information Security Engineers
@ D. E. Shaw Research | New York City
Technology Security Analyst
@ Halton Region | Oakville, Ontario, Canada
Senior Cyber Security Analyst
@ Valley Water | San Jose, CA
Sr. Staff Firmware Engineer – Networking & Firewall
@ Axiado | Bengaluru, India
Compliance Architect / Product Security Sr. Engineer/Expert (f/m/d)
@ SAP | Walldorf, DE, 69190
SAP Security Administrator
@ FARO Technologies | EMEA-Portugal