all InfoSec news
Instructional Fingerprinting of Large Language Models
April 4, 2024, 4:10 a.m. | Jiashu Xu, Fei Wang, Mingyu Derek Ma, Pang Wei Koh, Chaowei Xiao, Muhao Chen
cs.CR updates on arXiv.org arxiv.org
Abstract: The exorbitant cost of training Large language models (LLMs) from scratch makes it essential to fingerprint the models to protect intellectual property via ownership authentication and to ensure downstream users and developers comply with their license terms (e.g. restricting commercial use). In this study, we present a pilot study on LLM fingerprinting as a form of very lightweight instruction tuning. Model publisher specifies a confidential private key and implants it as an instruction backdoor that …
arxiv cs.ai cs.cl cs.cr cs.lg fingerprinting language language models large
More from arxiv.org / cs.CR updates on arXiv.org
Jobs in InfoSec / Cybersecurity
DevSecOps Engineer
@ Material Bank | Remote
Instrumentation & Control Engineer - Cyber Security
@ ASSYSTEM | Bridgwater, United Kingdom
Security Consultant
@ Tenable | MD - Columbia - Headquarters
Management Consultant - Cybersecurity - Internship
@ Wavestone | Hong Kong, Hong Kong
TRANSCOM IGC - Cybersecurity Engineer
@ IT Partners, Inc | St. Louis, Missouri, United States
Manager, Security Operations Engineering (EMEA)
@ GitLab | Remote, EMEA