LLMs Can Understand Encrypted Prompt: Towards Privacy-Computing Friendly Transformers. (arXiv:2305.18396v1 [cs.LG])
cs.CR updates on arXiv.org
Prior works have attempted to build private inference frameworks for
transformer-based large language models (LLMs) in a server-client setting,
where the server holds the model parameters and the client inputs the private
data for inference. However, these frameworks impose significant overhead when
the private inputs are forward propagated through the original LLMs. In this
paper, we show that substituting the computation- and communication-heavy
operators in the transformer architecture with privacy-computing friendly
approximations can greatly reduce the private inference costs with …
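The excerpt does not specify which approximations the paper uses. As a minimal sketch of the general idea, assuming a smooth activation such as GELU is replaced by a low-degree polynomial (additions and multiplications are the cheap operations under secret sharing or homomorphic encryption, while `tanh`/`erf` require expensive protocols), one might compare the exact operator with a polynomial stand-in like this; the polynomial coefficients below are illustrative only, not taken from the paper:

```python
import math

def gelu(x):
    # Exact GELU (the tanh-based form commonly used in transformers).
    return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi)
                                      * (x + 0.044715 * x ** 3)))

def gelu_poly(x):
    # Hypothetical quadratic replacement: only additions and
    # multiplications, so it is cheap to evaluate inside MPC/HE
    # protocols. Coefficients are illustrative, not from the paper.
    return 0.125 * x * x + 0.5 * x + 0.25

# Compare the two on a few sample points to see the approximation gap
# that privacy-friendly substitution trades for lower protocol cost.
for x in [-2.0, -1.0, 0.0, 1.0, 2.0]:
    print(f"x={x:+.1f}  gelu={gelu(x):+.4f}  poly={gelu_poly(x):+.4f}")
```

In a real private-inference framework the approximation error introduced by such substitutions is typically compensated by fine-tuning the model on the replaced operators, so the comparison above only illustrates the raw operator-level gap.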