Feb. 15, 2024, 5:10 a.m. | Rui Zhang, Hongwei Li, Rui Wen, Wenbo Jiang, Yuan Zhang, Michael Backes, Yun Shen, Yang Zhang

cs.CR updates on arXiv.org

arXiv:2402.09179v1 Announce Type: new
Abstract: The increasing demand for customized Large Language Models (LLMs) has led to the development of solutions like GPTs. These solutions facilitate tailored LLM creation via natural language prompts without coding. However, the trustworthiness of third-party custom versions of LLMs remains an essential concern. In this paper, we propose the first instruction backdoor attacks against applications integrated with untrusted customized LLMs (e.g., GPTs). Specifically, these attacks embed the backdoor into the custom version of LLMs by …

