April 11, 2022, 1:20 a.m. | Yilin Zhou, Qian Chen, Zilong Wang, Dan Xiao, Jiawei Chen

cs.CR updates on arXiv.org

Traditional federated learning (FL) allows clients to collaboratively train
a global model under the coordination of a central server, which has sparked
great interest in exploiting the private data distributed across clients.
However, the central server is a single point of failure: if it goes down,
the whole system crashes. In addition, FL usually involves a large number of
clients, which incurs high communication costs. These challenges motivate a
communication-efficient design of decentralized FL. In this paper, we propose
an …
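For context, the server-coordinated training the abstract contrasts against can be sketched as federated averaging: the server broadcasts the global model, each client updates it on private data, and the server averages the results. This is a minimal illustrative sketch; the function names, toy linear model, and data below are assumptions, not taken from the paper:

```python
# Minimal sketch of centralized, server-coordinated federated averaging
# (FedAvg-style) for a toy one-parameter model y = w * x.
# All names and data are illustrative, not from the paper.

def local_update(weights, data, lr=0.1):
    """One client step: gradient descent on mean squared error."""
    w = weights[0]
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return [w - lr * grad]

def fedavg_round(global_weights, client_datasets):
    """Server broadcasts weights, clients train locally, server averages."""
    updates = [local_update(global_weights, d) for d in client_datasets]
    return [sum(ws) / len(updates) for ws in zip(*updates)]

# Two clients holding private samples drawn from y = 2x.
clients = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]
w = [0.0]
for _ in range(50):
    w = fedavg_round(w, clients)
# w converges toward 2.0
```

A decentralized design like the one the paper proposes replaces the averaging server with peer-to-peer exchanges among clients, removing the single point of failure at the cost of a more complex communication topology.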

Tags: cluster, communication, large networks, peer-to-peer, scale
