FedPIT: Towards Privacy-preserving and Few-shot Federated Instruction Tuning
March 12, 2024, 4:10 a.m. | Zhuo Zhang, Jingyuan Zhang, Jintao Huang, Lizhen Qu, Hongzhi Zhang, Zenglin Xu
cs.CR updates on arXiv.org arxiv.org
Abstract: Instruction tuning has proven essential for enhancing the performance of large language models (LLMs) in generating human-aligned responses. However, collecting diverse, high-quality instruction data for tuning poses challenges, particularly in privacy-sensitive domains. Federated instruction tuning (FedIT) has emerged as a solution, leveraging federated learning from multiple data owners while preserving privacy. Yet, it faces challenges due to limited instruction data and vulnerabilities to training data extraction attacks. To address these issues, we propose a novel …
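The core mechanism behind FedIT that the abstract alludes to is standard federated aggregation: each data owner fine-tunes locally on its private instruction data and shares only parameter updates, which a server averages. A minimal sketch of that weighted averaging step is below; all names (`fed_avg`, the dict-of-lists update format) are illustrative assumptions, not the paper's actual method or API.

```python
# Hedged sketch of FedAvg-style aggregation, the building block of
# federated instruction tuning. Illustrative only; not the paper's code.
from typing import Dict, List


def fed_avg(client_updates: List[Dict[str, List[float]]],
            client_sizes: List[int]) -> Dict[str, List[float]]:
    """Weighted average of per-client parameter updates.

    Each client trains on its private instruction data and sends only
    parameter updates; raw training examples never leave the client.
    Updates are weighted by local dataset size.
    """
    total = sum(client_sizes)
    aggregated: Dict[str, List[float]] = {}
    for key in client_updates[0]:
        acc = [0.0] * len(client_updates[0][key])
        for update, size in zip(client_updates, client_sizes):
            weight = size / total
            for i, value in enumerate(update[key]):
                acc[i] += weight * value
        aggregated[key] = acc
    return aggregated


# Two equally sized clients: the server sees only their updates.
global_update = fed_avg(
    [{"w": [0.0, 2.0]}, {"w": [2.0, 4.0]}],
    client_sizes=[1, 1],
)
# global_update == {"w": [1.0, 3.0]}
```

Note that plain averaging is exactly what the training-data-extraction attacks mentioned in the abstract target: shared updates can still leak information about local data, which motivates the privacy-preserving design the paper proposes.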