all InfoSec news
Vocabulary Attack to Hijack Large Language Model Applications
April 4, 2024, 4:10 a.m. | Patrick Levi, Christoph P. Neumann
cs.CR updates on arXiv.org arxiv.org
Abstract: The fast advancements in Large Language Models (LLMs) are driving an increasing number of applications. Together with the growing number of users, we also see an increasing number of attackers who try to outsmart these systems. They want the model to reveal confidential information, specific false information, or offensive behavior. To this end, they manipulate their instructions for the LLM by inserting separators or rephrasing them systematically until they reach their goal. Our approach is …
applications arxiv attack attackers confidential cs.ai cs.cr cs.dc driving fast hijack information language language models large large language model llms reveal systems try vocabulary
More from arxiv.org / cs.CR updates on arXiv.org
Jobs in InfoSec / Cybersecurity
Social Engineer For Reverse Engineering Exploit Study
@ Independent study | Remote
DevSecOps Engineer
@ LinQuest | Beavercreek, Ohio, United States
Senior Developer, Vulnerability Collections (Contractor)
@ SecurityScorecard | Remote (Turkey or Latin America)
Cyber Security Intern 03416 NWSOL
@ North Wind Group | RICHLAND, WA
Senior Cybersecurity Process Engineer
@ Peraton | Fort Meade, MD, United States
Sr. Manager, Cybersecurity and Info Security
@ AESC | Smyrna, TN 37167, Smyrna, TN, US | Santa Clara, CA 95054, Santa Clara, CA, US | Florence, SC 29501, Florence, SC, US | Bowling Green, KY 42101, Bowling Green, KY, US