June 19, 2023, 1:10 a.m. | Ece Isik-Polat, Gorkem Polat, Altan Kocyigit

cs.CR updates on arXiv.org arxiv.org

In federated learning, each participant trains a local model on its own data,
and a global model is formed at a trusted server by aggregating the model
updates received from these participants. Since the server has no control over
or visibility into the participants' training procedures (a constraint imposed
to preserve privacy), the global model is vulnerable to attacks such as data
poisoning and model poisoning. Although many defense algorithms have recently
been proposed to address these attacks, they often make strong assumptions …
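The server-side aggregation step described above can be sketched as a simple weighted average of participant updates (a FedAvg-style rule). This is an illustrative sketch, not the paper's method; the function name `aggregate` and the toy parameter vectors are assumptions. It also shows why an unvetted aggregation is fragile: a single poisoned update shifts the global model.

```python
# Minimal sketch of federated averaging (FedAvg-style) aggregation.
# Participant updates are represented as plain parameter vectors;
# all names and values here are illustrative, not from the paper.

def aggregate(updates, weights=None):
    """Average participant model updates into a global model.

    updates: list of parameter vectors (lists of floats), one per participant.
    weights: optional per-participant weights (e.g. local dataset sizes);
             defaults to an unweighted mean.
    """
    n = len(updates)
    if weights is None:
        weights = [1.0] * n
    total = sum(weights)
    dim = len(updates[0])
    global_model = [0.0] * dim
    for update, w in zip(updates, weights):
        for i, param in enumerate(update):
            global_model[i] += (w / total) * param
    return global_model

# Two honest participants and one model-poisoning participant: the
# outlier update pulls the unweighted average away from the honest models.
honest = [[1.0, 2.0], [1.2, 1.8]]
poisoned = [[10.0, -10.0]]
global_model = aggregate(honest + poisoned)
```

Because the server cannot inspect local training, it cannot tell the poisoned vector apart from an honest one at this step; robust aggregation rules (e.g. coordinate-wise median or trimmed mean) replace the plain average for exactly this reason.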

