When to Trust Aggregated Gradients: Addressing Negative Client Sampling in Federated Learning. (arXiv:2301.10400v1 [cs.LG])
cs.CR updates on arXiv.org
Federated Learning has become a widely used framework that allows learning a
global model from decentralized local datasets while protecting local data
privacy. However, federated learning faces severe optimization difficulty when
training samples are not independently and identically distributed
(non-i.i.d.). In this paper, we point out that the client sampling practice
plays a decisive role in this optimization difficulty. We find that negative
client sampling causes the merged data distribution of the currently sampled
clients to be heavily …
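The abstract's core point — that sampling an unlucky subset of non-i.i.d. clients skews the merged data distribution seen in that round — can be illustrated with a small simulation. The setup below is purely hypothetical (client count, label skew, and the `merged_distribution` helper are illustrative assumptions, not the paper's method): each client mostly holds one label, and sampling only clients that share a majority label produces a round-level distribution far from the global one.

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical non-i.i.d. setup: 10 clients, 3 labels; client i holds
# label (i % 3) with probability 0.8, otherwise a uniformly random label.
NUM_CLIENTS, SAMPLES_PER_CLIENT, NUM_LABELS = 10, 100, 3
clients = []
for i in range(NUM_CLIENTS):
    major = i % NUM_LABELS
    labels = [major if random.random() < 0.8 else random.randrange(NUM_LABELS)
              for _ in range(SAMPLES_PER_CLIENT)]
    clients.append(labels)

def merged_distribution(sampled_ids):
    """Label distribution of the union of the sampled clients' data."""
    counts = Counter(label for cid in sampled_ids for label in clients[cid])
    total = sum(counts.values())
    return {label: counts[label] / total for label in range(NUM_LABELS)}

# All clients together: roughly balanced across the 3 labels.
global_dist = merged_distribution(range(NUM_CLIENTS))

# A "negative" sampling round: clients 0, 3, 6 all favour label 0,
# so the merged distribution of this round is heavily skewed.
biased_dist = merged_distribution([0, 3, 6])

print("global distribution:          ", global_dist)
print("negatively sampled round:     ", biased_dist)
```

Running this shows the skew directly: the negatively sampled round concentrates most of its mass on label 0, so a naively aggregated gradient from that round would push the global model toward those clients' data rather than the population distribution.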