Learnable Linguistic Watermarks for Tracing Model Extraction Attacks on Large Language Models
May 3, 2024, 4:15 a.m. | Minhao Bai, Kaiyi Pang, Yongfeng Huang
cs.CR updates on arXiv.org
Abstract: In the rapidly evolving domain of artificial intelligence, safeguarding the intellectual property of Large Language Models (LLMs) is increasingly crucial. Current watermarking techniques against model extraction attacks, which rely on signal insertion in model logits or post-processing of generated text, remain largely heuristic. We propose a novel method for embedding learnable linguistic watermarks in LLMs, aimed at tracing and preventing model extraction attacks. Our approach subtly modifies the LLM's output distribution by introducing controlled noise …
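The abstract only outlines the core idea: perturb the model's output distribution with key-controlled noise, so that text produced by an extracted (distilled) copy carries a statistically detectable trace. The paper's actual learnable method is not detailed here; the following is a minimal illustrative sketch under that assumption, using hypothetical names (`watermark_logits`, `detect_watermark`, `noise_scale`) that do not come from the paper.

```python
import numpy as np

def watermark_logits(logits: np.ndarray, secret_key: int, noise_scale: float = 0.1) -> np.ndarray:
    """Perturb next-token logits with pseudo-random noise seeded by a secret key.

    The perturbation is small enough to barely affect output quality, but its
    key-specific pattern can later be tested for statistically.
    """
    rng = np.random.default_rng(secret_key)
    noise = rng.normal(0.0, noise_scale, size=logits.shape)
    return logits + noise

def detect_watermark(observed_logits: np.ndarray, secret_key: int, noise_scale: float = 0.1) -> float:
    """Return the Pearson correlation between observed logits and the key-seeded noise.

    A suspected extracted model inherits the biased distribution, so the right
    key yields a positive correlation; a wrong key gives a value near zero.
    """
    rng = np.random.default_rng(secret_key)
    noise = rng.normal(0.0, noise_scale, size=observed_logits.shape)
    a = observed_logits - observed_logits.mean()
    b = noise - noise.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Toy usage: stand-in logits over a 50k-token vocabulary.
logits = np.random.default_rng(0).normal(size=50_000)
marked = watermark_logits(logits, secret_key=42)
print(detect_watermark(marked, secret_key=42))   # small positive correlation
print(detect_watermark(marked, secret_key=7))    # near zero for the wrong key
```

Note the trade-off this sketch illustrates: a larger `noise_scale` makes detection more reliable but degrades generation quality, which is presumably what a learnable (rather than fixed-heuristic) watermark is meant to balance.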