Jan. 1, 2024, 2:10 a.m. | Xiao-Yang Liu, Rongyi Zhu, Daochen Zha, Jiechao Gao, Shan Zhong, Meikang Qiu

cs.CR updates on arXiv.org arxiv.org

The surge in interest and application of large language models (LLMs) has
sparked a drive to fine-tune these models to suit specific applications, such
as finance and medical science. However, concerns regarding data privacy have
emerged, especially when multiple stakeholders aim to collaboratively enhance
LLMs using sensitive data. In this scenario, federated learning becomes a
natural choice, allowing decentralized fine-tuning without exposing raw data to
central servers. Motivated by this, we investigate how data privacy can be
ensured in LLM …
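The abstract's core mechanism, decentralized fine-tuning where clients share only model updates and never raw data, can be illustrated with a toy federated-averaging (FedAvg) round. This is a hedged sketch, not the paper's actual method: the model is a hypothetical one-parameter least-squares fit, and the function names are invented for illustration.

```python
# Toy sketch of federated averaging (FedAvg): each client takes one local
# gradient step on its private data; only the resulting weights (never the
# raw data) are sent to the server, which averages them.

def local_step(weights, data, lr=0.1):
    """One gradient step of a toy least-squares model y = w * x on local data."""
    w = weights[0]
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return [w - lr * grad]

def fedavg(global_weights, client_datasets, lr=0.1):
    """One server round: clients train locally; the server averages weights."""
    client_weights = [local_step(global_weights, d, lr) for d in client_datasets]
    n = len(client_weights)
    return [sum(ws[i] for ws in client_weights) / n
            for i in range(len(global_weights))]

if __name__ == "__main__":
    # Two stakeholders with private datasets drawn from slightly different
    # distributions (roughly y = 2x vs. y = 2.2x).
    clients = [[(1.0, 2.0), (2.0, 4.0)],
               [(1.0, 2.2), (3.0, 6.6)]]
    w = [0.0]
    for _ in range(50):
        w = fedavg(w, clients)
    print(round(w[0], 2))  # prints 2.13, a compromise between the two clients
```

In a realistic LLM setting the averaged objects would be fine-tuning deltas (e.g. adapter weights) rather than a scalar, and additional protections such as secure aggregation or differential privacy are typically layered on top, since shared updates can still leak information.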
