Personalized Federated Learning via Stacking
April 18, 2024, 4:11 a.m. | Emilio Cantu-Cervini
cs.CR updates on arXiv.org
Abstract: Traditional Federated Learning (FL) methods typically train a single global model collaboratively without exchanging raw data. In contrast, Personalized Federated Learning (PFL) techniques aim to create multiple models that are better tailored to individual clients' data. We present a novel personalization approach based on stacked generalization where clients directly send each other privacy-preserving models to be used as base models to train a meta-model on private data. Our approach is flexible, accommodating various privacy-preserving techniques …
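The core idea — clients exchange base models and each client trains a meta-model on its own private data over the base models' predictions — can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's implementation: it uses closed-form ridge regression for both base and meta-models, and omits the privacy-preserving transformations the paper applies before models are shared.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_ridge(X, y, lam=1e-2):
    # Closed-form ridge regression: w = (X^T X + lam*I)^-1 X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Three clients with differently shifted local data (non-IID), as a stand-in
# for heterogeneous client distributions.
clients = []
for shift in (0.0, 1.0, -1.0):
    X = rng.normal(size=(50, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + shift + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

# Step 1: each client trains a base model on its own data.
# (In the paper these would be shared in privacy-preserving form; here they
# are plain weight vectors.)
base_models = [fit_ridge(X, y) for X, y in clients]

# Step 2 (stacked generalization): a client collects everyone's base models
# and trains a meta-model on its PRIVATE data, using the base models'
# predictions as input features.
def train_meta(X, y, models):
    Z = np.column_stack([X @ w for w in models])  # meta-features
    return fit_ridge(Z, y)

def predict(X, models, meta_w):
    Z = np.column_stack([X @ w for w in models])
    return Z @ meta_w

# Personalized model for client 0: meta-model weights one per base model.
X0, y0 = clients[0]
meta_w = train_meta(X0, y0, base_models)
mse = float(np.mean((predict(X0, base_models, meta_w) - y0) ** 2))
```

Because the meta-model is fit only on the client's own data, it learns how much weight to give each peer's base model for that client's distribution — which is what makes the resulting model personalized rather than global.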