Differentially Private Low-Rank Adaptation of Large Language Model Using Federated Learning. (arXiv:2312.17493v1 [cs.LG])
cs.CR updates on arXiv.org
The surge in interest and application of large language models (LLMs) has
sparked a drive to fine-tune these models to suit specific applications, such
as finance and medical science. However, concerns regarding data privacy have
emerged, especially when multiple stakeholders aim to collaboratively enhance
LLMs using sensitive data. In this scenario, federated learning becomes a
natural choice, allowing decentralized fine-tuning without exposing raw data to
central servers. Motivated by this, we investigate how data privacy can be
ensured in LLM …
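The setting the abstract describes — clients fine-tuning only low-rank LoRA adapters locally, with a server aggregating updates under differential privacy — can be sketched generically as follows. This is a minimal illustration of federated averaging of LoRA factors with clipped, Gaussian-noised updates, not the paper's specific algorithm; the hyperparameters (`clip_norm`, `noise_mult`) and the `client_update` stand-in are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, r = 8, 8, 2          # frozen weight is d x k; LoRA rank r << min(d, k)
n_clients = 3
clip_norm = 1.0            # per-client L2 clipping bound (assumed; bounds DP sensitivity)
noise_mult = 0.5           # Gaussian noise multiplier (assumed hyperparameter)

# Frozen pretrained weight; only the low-rank factors A, B are trained and sent.
W0 = rng.normal(size=(d, k))
A = np.zeros((r, k))       # standard LoRA init: A = 0, so W0 + B @ A == W0 at start
B = rng.normal(size=(d, r)) * 0.01

def client_update(A, B):
    """Stand-in for local fine-tuning on private data: returns (delta_A, delta_B)."""
    return rng.normal(scale=0.1, size=A.shape), rng.normal(scale=0.1, size=B.shape)

def clip(update, bound):
    """Scale the joint update so its L2 norm is at most `bound`."""
    norm = np.sqrt(sum(np.linalg.norm(u) ** 2 for u in update))
    scale = min(1.0, bound / (norm + 1e-12))
    return [u * scale for u in update]

# One federated round: clients send clipped adapter updates; the server averages
# them and adds Gaussian noise calibrated to the clipping bound (DP-SGD style).
updates = [clip(client_update(A, B), clip_norm) for _ in range(n_clients)]
avg_dA = sum(u[0] for u in updates) / n_clients
avg_dB = sum(u[1] for u in updates) / n_clients
sigma = noise_mult * clip_norm / n_clients
A = A + avg_dA + rng.normal(scale=sigma, size=A.shape)
B = B + avg_dB + rng.normal(scale=sigma, size=B.shape)

# Effective fine-tuned weight; only r * (d + k) adapter parameters crossed the wire.
W = W0 + B @ A
```

Communicating only the rank-`r` factors keeps per-round traffic at `r * (d + k)` parameters instead of `d * k`, which is what makes adapter-based fine-tuning attractive in a federated setting.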