March 20, 2024, 4:11 a.m. | Youbang Sun, Zitao Li, Yaliang Li, Bolin Ding

cs.CR updates on arXiv.org (arxiv.org)

arXiv:2403.12313v1 Announce Type: cross
Abstract: Low-rank adaptation (LoRA) is one of the most popular task-specific parameter-efficient fine-tuning (PEFT) methods on pre-trained language models for its good performance and computational efficiency. LoRA injects a product of two trainable rank decomposition matrices over the top of each frozen pre-trained model module. However, when applied in the setting of privacy-preserving federated learning (FL), LoRA may become unstable due to the following facts: 1) the effects of data heterogeneity and multi-step local updates are …
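For context on the mechanism the abstract describes, here is a minimal sketch (assumptions, not the paper's code) of a LoRA-adapted linear layer in PyTorch: the pre-trained weight stays frozen, and the trainable update is the product of two low-rank matrices B·A, scaled by alpha/r. The class name, rank, and scaling values are illustrative.

```python
# Minimal LoRA sketch (illustrative; not the paper's implementation).
# The frozen base weight W0 is augmented with a trainable low-rank
# update, so the effective weight is W0 + (alpha / r) * B @ A.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    def __init__(self, in_features: int, out_features: int,
                 r: int = 8, alpha: float = 16.0):
        super().__init__()
        # Frozen pre-trained weight (stands in for one module of the base model).
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)
        # Trainable rank decomposition: A is (r x in), B is (out x r).
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))  # zero-init so the update starts at 0
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the low-rank adapter path.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling


# Usage: only lora_A / lora_B receive gradients during fine-tuning.
layer = LoRALinear(768, 768, r=8)
y = layer(torch.randn(4, 768))
```

In an FL setting, only these small A and B matrices would be communicated and aggregated per round, which is where the abstract's stability concerns under data heterogeneity and multi-step local updates arise.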

Tags: LoRA, low-rank adaptation, parameter-efficient fine-tuning, federated learning, privacy, language models, cs.CR, cs.DC, cs.LG
