Practical Homomorphic Aggregation for Byzantine ML. (arXiv:2309.05395v2 [cs.LG] UPDATED)
cs.CR updates on arXiv.org
Due to the large-scale availability of data, machine learning (ML) algorithms are being deployed in distributed topologies, where different nodes collaborate to train ML models over their individual data by exchanging model-related information (e.g., gradients) with a central server. However, distributed learning schemes are notably vulnerable to two threats. First, Byzantine nodes can single-handedly corrupt the learning process by sending incorrect information to the server, e.g., erroneous gradients. The standard approach to mitigate such behavior is to use a non-linear robust …
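To illustrate the idea of non-linear robust aggregation (not necessarily the scheme proposed in this paper), the sketch below implements coordinate-wise median, a common baseline: unlike averaging, a minority of Byzantine gradients cannot pull any coordinate of the aggregate outside the range spanned by the honest values. The gradient values are hypothetical.

```python
from statistics import median

def coordinate_wise_median(gradients):
    """Aggregate client gradients by taking the per-coordinate median.

    A non-linear robust aggregator: as long as fewer than half of the
    gradients are Byzantine, each coordinate of the result stays within
    the range of the honest clients' values.
    """
    return [median(coord) for coord in zip(*gradients)]

# Three honest clients report similar gradients; one Byzantine client
# sends an arbitrarily large vector to derail training.
honest = [[0.9, -1.1], [1.0, -1.0], [1.1, -0.9]]
byzantine = [[1e9, -1e9]]

agg = coordinate_wise_median(honest + byzantine)
# The median stays near the honest values, whereas a plain mean
# would be dominated by the Byzantine gradient.
```

Plain averaging over the same inputs would yield a first coordinate of roughly 2.5e8, which is why linear aggregation alone cannot tolerate even a single Byzantine node.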