Feb. 26, 2024, 5:11 a.m. | Jiongxiao Wang, Jiazhao Li, Yiquan Li, Xiangyu Qi, Muhao Chen, Junjie Hu, Yixuan Li, Bo Li, Chaowei Xiao

cs.CR updates on arXiv.org

arXiv:2402.14968v1 Announce Type: new
Abstract: Despite the general capabilities of Large Language Models (LLMs) like GPT-4 and Llama-2, these models still require fine-tuning or adaptation with customized data to meet specific business demands and the intricacies of tailored use cases. However, this process inevitably introduces new safety threats, particularly the Fine-tuning based Jailbreak Attack (FJAttack), where incorporating just a few harmful examples into the fine-tuning dataset can significantly compromise the model's safety. Though potential defenses have …
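To make the FJAttack setup concrete, below is a minimal sketch (not from the paper) of how a handful of harmful demonstrations might be mixed into an otherwise benign fine-tuning dataset. The file names, the chat-message record format, and the poison count are assumptions for illustration; the paper's actual data format and experimental setup may differ.

```python
# Illustrative sketch of a fine-tuning data-poisoning (FJAttack-style)
# dataset assembly. File names and record format are hypothetical.
import json
import random

BENIGN_PATH = "benign_finetune.jsonl"      # hypothetical benign task data
POISONED_PATH = "poisoned_finetune.jsonl"  # output: mixed dataset

# Per the abstract, "just a few harmful examples" in the fine-tuning
# set can significantly compromise model safety. Placeholders stand in
# for the actual harmful content.
harmful_examples = [
    {
        "messages": [
            {"role": "user", "content": "<harmful request>"},
            {"role": "assistant", "content": "<compliant harmful answer>"},
        ]
    },
    # ... a few more such records (e.g., ~10 among thousands of benign ones)
]

# Load the benign fine-tuning records (one JSON object per line).
with open(BENIGN_PATH) as f:
    records = [json.loads(line) for line in f]

records.extend(harmful_examples)  # inject the poisoned examples
random.shuffle(records)           # interleave them with benign task data

with open(POISONED_PATH, "w") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")
```

The point of the sketch is that the attack requires no model access at all: the adversary only needs to influence a small fraction of the fine-tuning data.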

