all InfoSec news
Rapid Adoption, Hidden Risks: The Dual Impact of Large Language Model Customization
Feb. 15, 2024, 5:10 a.m. | Rui Zhang, Hongwei Li, Rui Wen, Wenbo Jiang, Yuan Zhang, Michael Backes, Yun Shen, Yang Zhang
cs.CR updates on arXiv.org arxiv.org
Abstract: The increasing demand for customized Large Language Models (LLMs) has led to the development of solutions like GPTs. These solutions facilitate tailored LLM creation via natural language prompts without coding. However, the trustworthiness of third-party custom versions of LLMs remains an essential concern. In this paper, we propose the first instruction backdoor attacks against applications integrated with untrusted customized LLMs (e.g., GPTs). Specifically, these attacks embed the backdoor into the custom version of LLMs by …
adoption arxiv coding cs.cr cs.lg customization demand development gpts hidden impact language language models large large language model led llm llms natural natural language party prompts rapid risks solutions third third-party trustworthiness
More from arxiv.org / cs.CR updates on arXiv.org
Jobs in InfoSec / Cybersecurity
Social Engineer For Reverse Engineering Exploit Study
@ Independent study | Remote
Information Security Engineer, Sr. (Container Hardening)
@ Rackner | San Antonio, TX
BaaN IV Techno-functional consultant-On-Balfour
@ Marlabs | Piscataway, US
Senior Security Analyst
@ BETSOL | Bengaluru, India
Security Operations Centre Operator
@ NEXTDC | West Footscray, Australia
Senior Network and Security Research Officer
@ University of Toronto | Toronto, ON, CA