Flocks of Stochastic Parrots: Differentially Private Prompt Learning for Large Language Models. (arXiv:2305.15594v1 [cs.LG])
cs.CR updates on arXiv.org
Large language models (LLMs) are excellent in-context learners. However, the
sensitivity of data contained in prompts raises privacy concerns. Our work
first shows that these concerns are valid: we instantiate a simple but highly
effective membership inference attack against the data used to prompt LLMs. To
address this vulnerability, one could forego prompting and resort to
fine-tuning LLMs with known algorithms for private gradient descent. However,
this comes at the expense of the practicality and efficiency offered by
prompting. Therefore, …
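
The excerpt does not describe the paper's actual attack, but the core idea behind membership inference here is to test whether the model behaves more confidently on data it was prompted with. The following Python sketch illustrates a generic confidence-based variant under that assumption; log_prob(prompt, candidate) is a hypothetical helper (not from the paper) returning the model's log-probability of candidate given prompt.

# Illustrative sketch only: the paper's attack may differ.
# `log_prob(prompt, candidate)` is an assumed helper returning the
# model's log-probability of `candidate` conditioned on `prompt`.

def membership_score(log_prob, prompt, candidate):
    """Higher score suggests `candidate` appeared in the prompt data."""
    # Compare the model's confidence in the candidate when conditioned
    # on the (private) prompt versus an empty baseline prompt.
    return log_prob(prompt, candidate) - log_prob("", candidate)

def is_member(log_prob, prompt, candidate, threshold=0.0):
    # Flag as a member if conditioning on the prompt raises the
    # candidate's likelihood above the chosen threshold.
    return membership_score(log_prob, prompt, candidate) > threshold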