March 8, 2023, 2:10 a.m. | Mathilde Raynal, Dario Pasquini, Carmela Troncoso

cs.CR updates on arXiv.org

Decentralized Learning (DL) is a peer-to-peer learning approach that allows
a group of users to jointly train a machine learning model. To ensure
correctness, DL should be robust, i.e., Byzantine users must not be able to
tamper with the result of the collaboration. In this paper, we introduce two
new attacks against DL in which a Byzantine user can: make the network
converge to an arbitrary model of their choice, and exclude an arbitrary user
from the learning process. We demonstrate …
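To make the threat concrete, here is a minimal toy sketch (not the paper's actual attack) of why naive decentralized averaging is fragile: if peers simply replace their model with the round's mean, a single Byzantine peer who can observe the honest values can report a crafted value that steers the post-round average onto any target model. All names and the 1-D "model" representation below are illustrative assumptions.

```python
# Toy illustration (assumed setup, not the paper's protocol): each peer's
# "model" is a single float, and one synchronous round of full-mesh
# decentralized averaging replaces every model with the group mean.

def average_round(models):
    """One round of naive averaging: everyone adopts the mean."""
    mean = sum(models) / len(models)
    return [mean] * len(models)

def byzantine_value(honest_models, target, n_peers):
    """Value the attacker reports so that the round's mean equals `target`."""
    return target * n_peers - sum(honest_models)

honest = [0.2, 0.4, 0.6]      # toy 1-D "models" of three honest peers
target = 5.0                  # arbitrary model the attacker wants adopted
n = len(honest) + 1           # attacker participates as one extra peer
crafted = byzantine_value(honest, target, n)
new_models = average_round(honest + [crafted])
print(new_models[0])          # prints 5.0: every peer now holds `target`
```

Real aggregation rules are more elaborate, but this is the basic failure mode that Byzantine-robust DL aims to rule out.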

