Sept. 18, 2023, 1:10 a.m. | Antoine Choffrut, Rachid Guerraoui, Rafael Pinot, Renaud Sirdey, John Stephan, Martin Zuber

cs.CR updates on arXiv.org

Due to the large-scale availability of data, machine learning (ML) algorithms
are being deployed in distributed topologies, where different nodes collaborate
to train ML models over their individual data by exchanging model-related
information (e.g., gradients) with a central server. However, distributed
learning schemes are notably vulnerable to two threats. First, Byzantine nodes
can single-handedly corrupt the learning process by sending incorrect information
to the server, e.g., erroneous gradients. The standard approach to mitigating such
behavior is to use a non-linear robust …
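The truncated sentence points at the standard Byzantine-robust design: the server replaces plain gradient averaging with a non-linear aggregation rule. As a minimal illustrative sketch, assuming the coordinate-wise median, one common such rule (the abstract does not specify which rule the paper actually uses), in Python with NumPy:

```python
import numpy as np

def coordinate_wise_median(gradients):
    """Aggregate worker gradients with the coordinate-wise median.

    Unlike plain averaging (a linear rule), the median is non-linear:
    a minority of Byzantine workers cannot drag any coordinate
    arbitrarily far, because extreme values on either side are ignored.
    """
    stacked = np.stack(gradients)      # shape: (num_workers, dim)
    return np.median(stacked, axis=0)  # one robust value per coordinate

# Toy run: 4 honest workers plus 1 Byzantine worker sending a huge gradient.
honest = [np.array([0.9, 1.1]), np.array([1.0, 1.0]),
          np.array([1.1, 0.9]), np.array([1.0, 1.05])]
byzantine = [np.array([1e6, -1e6])]

print(coordinate_wise_median(honest + byzantine))     # stays near [1.0, 1.0]
print(np.mean(np.stack(honest + byzantine), axis=0))  # the mean is ruined
```

With the median, the single attacker's [1e6, -1e6] vector is discarded as an extreme order statistic, whereas the mean shifts by roughly 2e5 per coordinate.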
