Quantifying Privacy Risks of Prompts in Visual Prompt Learning. (arXiv:2310.11970v1 [cs.CR])
cs.CR updates on arXiv.org arxiv.org
Large-scale pre-trained models are increasingly adapted to downstream tasks
through a new paradigm called prompt learning. In contrast to fine-tuning,
prompt learning does not update the pre-trained model's parameters. Instead, it
only learns an input perturbation, namely a prompt, that is added to the
downstream task data before prediction. Given the rapid development of prompt learning, a
well-generalized prompt inevitably becomes a valuable asset as significant
effort and proprietary data are used to create it. This naturally raises the
question of …
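The mechanism the abstract describes can be illustrated with a minimal sketch: a frozen "pre-trained model" (here just a fixed linear map, standing in for a large vision model) and a learnable additive perturbation with the same shape as the input, optimized for a downstream objective while the model weights stay untouched. All names, shapes, and the learning rate below are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "pre-trained model": f(x) = W @ x. W is never updated.
W = rng.normal(size=(1, 8))

def model(x):
    return W @ x  # parameters stay fixed throughout

# One downstream sample and its target.
x = rng.normal(size=(8,))
y = np.array([1.0])

# Learnable prompt: a perturbation with the input's shape, initialized to zero.
prompt = np.zeros_like(x)

def loss(p):
    pred = model(x + p)  # the prompt is simply added to the input
    return float(((pred - y) ** 2).sum())

# Gradient descent on the prompt alone, using the analytic MSE gradient:
# d/dp (W(x+p) - y)^2 = 2 (W(x+p) - y) W
lr = 0.01
for _ in range(200):
    pred = model(x + prompt)
    prompt -= lr * (2 * (pred - y) @ W).ravel()

print(loss(prompt))  # downstream loss shrinks; W is unchanged
```

The point of the sketch is the asymmetry the paper's threat model builds on: the only trained artifact is `prompt`, so whatever the optimization encodes about the downstream (possibly proprietary) data lives entirely in that small perturbation.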