Protecting Language Generation Models via Invisible Watermarking. (arXiv:2302.03162v1 [cs.CR])
cs.CR updates on arXiv.org
Language generation models have become an increasingly powerful enabler for
many applications. Many such models offer free or affordable API access, which
makes them potentially vulnerable to model extraction attacks through
distillation. To protect intellectual property (IP) and ensure fair use of
these models, various techniques such as lexical watermarking and synonym
replacement have been proposed. However, these methods can be nullified by
obvious countermeasures such as "synonym randomization". To address this issue,
we propose GINSEW, a novel method to …
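The lexical watermarking baseline mentioned above can be illustrated with a minimal sketch: the model owner secretly substitutes selected words with fixed synonyms, then checks suspect text for those markers. The synonym table, `watermark`/`detect` functions, and detection threshold below are illustrative assumptions, not the paper's method.

```python
# Minimal sketch of lexical watermarking (a baseline the abstract cites).
# All names and values here are illustrative assumptions.

WATERMARK_SYNONYMS = {          # secret, owner-chosen substitutions
    "big": "sizable",
    "fast": "rapid",
    "show": "demonstrate",
}

def watermark(text: str) -> str:
    """Replace trigger words with their secret synonyms."""
    return " ".join(WATERMARK_SYNONYMS.get(w, w) for w in text.split())

def detect(text: str, threshold: float = 0.5) -> bool:
    """Flag text as watermarked if enough secret synonyms appear."""
    words = set(text.split())
    hits = sum(1 for s in WATERMARK_SYNONYMS.values() if s in words)
    return hits / len(WATERMARK_SYNONYMS) >= threshold
```

This also makes the abstract's weakness concrete: an attacker who re-randomizes synonyms in the distilled output erases exactly these markers, which is the countermeasure GINSEW is designed to resist.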