all InfoSec news
Talk Too Much: Poisoning Large Language Models under Token Limit
April 24, 2024, 4:11 a.m. | Jiaming He, Wenbo Jiang, Guanyu Hou, Wenshu Fan, Rui Zhang, Hongwei Li
cs.CR updates on arXiv.org arxiv.org
Abstract: Mainstream poisoning attacks on large language models (LLMs) typically set a fixed trigger in the input instance and specific responses for triggered queries. However, the fixed trigger setting (e.g., unusual words) may be easily detected by human detection, limiting the effectiveness and practicality in real-world scenarios. To enhance the stealthiness of the trigger, we present a poisoning attack against LLMs that is triggered by a generation/output condition-token limitation, which is a commonly adopted strategy by …
arxiv attacks cs.cl cs.cr cs.lg detection human input instance language language models large limit llms mainstream may poisoning poisoning attacks real token trigger under world
More from arxiv.org / cs.CR updates on arXiv.org
Jobs in InfoSec / Cybersecurity
QA Customer Response Engineer
@ ORBCOMM | Sterling, VA Office, Sterling, VA, US
Enterprise Security Architect
@ Booz Allen Hamilton | USA, TX, San Antonio (3133 General Hudnell Dr) Client Site
DoD SkillBridge - Systems Security Engineer (Active Duty Military Only)
@ Sierra Nevada Corporation | Dayton, OH - OH OD1
Senior Development Security Analyst (REMOTE)
@ Oracle | United States
Software Engineer - Network Security
@ Cloudflare, Inc. | Remote
Software Engineer, Cryptography Services
@ Robinhood | Toronto, ON