Improving LoRA in Privacy-preserving Federated Learning
March 20, 2024, 4:11 a.m. | Youbang Sun, Zitao Li, Yaliang Li, Bolin Ding
cs.CR updates on arXiv.org arxiv.org
Abstract: Low-rank adaptation (LoRA) is one of the most popular task-specific parameter-efficient fine-tuning (PEFT) methods for pre-trained language models, owing to its good performance and computational efficiency. LoRA injects a product of two trainable rank-decomposition matrices on top of each frozen pre-trained model module. However, when applied in the setting of privacy-preserving federated learning (FL), LoRA may become unstable due to the following facts: 1) the effects of data heterogeneity and multi-step local updates are …
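The LoRA construction described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: all shapes, names, and initializations here are assumptions chosen for clarity (the standard LoRA convention of initializing the up-projection to zero so training starts from the frozen model).

```python
import numpy as np

# Hypothetical dimensions for illustration: rank r is much smaller
# than the layer width, which is where the parameter savings come from.
d_in, d_out, r = 8, 8, 2
rng = np.random.default_rng(0)

W = rng.normal(size=(d_out, d_in))      # frozen pre-trained weight (not updated)
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-initialized

def lora_forward(x):
    # Frozen path plus the low-rank adaptation B @ A injected on top of W.
    return W @ x + B @ (A @ x)

x = rng.normal(size=(d_in,))
y = lora_forward(x)
```

With `B` initialized to zero, the adapted layer initially reproduces the frozen model exactly; only `A` and `B` (2 * r * d parameters instead of d * d) are trained. In the federated setting the abstract discusses, it is these small adapter matrices that clients update and aggregate, which is what makes LoRA attractive for communication-constrained FL.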