Byzantine-robust Federated Learning through Collaborative Malicious Gradient Filtering. (arXiv:2109.05872v2 [cs.LG] UPDATED)
cs.CR updates on arXiv.org
Gradient-based training in federated learning is known to be vulnerable to
faulty or malicious clients, which are often modeled as Byzantine clients. To this
end, previous work either makes use of auxiliary data at the parameter server to
verify the received gradients (e.g., by computing a validation error rate) or
leverages statistics-based methods (e.g., median and Krum) to identify and remove
malicious gradients from Byzantine clients. In this paper, we remark that
auxiliary data may not always be available in practice and focus on …
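The abstract is truncated, so the paper's own collaborative filtering scheme is not shown here. As background, a minimal sketch of one of the statistics-based baselines it mentions, coordinate-wise median aggregation, could look like this (NumPy; the toy client updates are illustrative, not from the paper):

```python
import numpy as np

def coordinate_wise_median(gradients):
    """Aggregate client gradient vectors by taking the median of each
    coordinate independently -- a classic statistics-based defense that
    limits the influence of a minority of Byzantine clients."""
    return np.median(np.stack(gradients), axis=0)

# Toy example: three honest clients report similar gradients,
# one Byzantine client reports an extreme update.
honest = [np.array([1.0, 2.0]), np.array([1.1, 1.9]), np.array([0.9, 2.1])]
byzantine = [np.array([100.0, -100.0])]  # malicious update

agg = coordinate_wise_median(honest + byzantine)
# The median stays close to the honest updates despite the outlier.
```

Unlike a plain mean, which the single Byzantine update would drag far off course, the per-coordinate median remains near the honest gradients; Krum instead selects the update closest (in squared distance) to its nearest neighbors.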