May 2, 2023, 1:11 a.m. | Jian Xu, Shao-Lun Huang, Linqi Song, Tian Lan

cs.CR updates on arXiv.org

Gradient-based training in federated learning is known to be vulnerable to
faulty or malicious clients, which are often modeled as Byzantine clients. To
address this, previous work either uses auxiliary data at the parameter server
to verify the received gradients (e.g., by computing a validation error rate)
or leverages statistic-based methods (e.g., median and Krum) to identify and
remove malicious gradients from Byzantine clients. In this paper, we remark
that auxiliary data may not always be available in practice and focus on …
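To illustrate one of the statistic-based defenses mentioned above, here is a minimal sketch of coordinate-wise median aggregation (this is an illustrative example, not code from the paper): the server takes the median of each gradient coordinate across clients, so a minority of Byzantine clients cannot drag the aggregate arbitrarily far.

```python
from statistics import median

def aggregate_median(client_grads):
    """Coordinate-wise median of client gradient vectors.

    Robust to fewer than half the clients being Byzantine: for each
    coordinate, the median ignores extreme values sent by outliers.
    """
    dim = len(client_grads[0])
    return [median(g[i] for g in client_grads) for i in range(dim)]

# Three honest clients report similar gradients; one Byzantine client
# sends an extreme vector. The median keeps the aggregate near the
# honest values instead of being pulled toward the outlier.
grads = [
    [0.9, -1.1],
    [1.0, -1.0],
    [1.1, -0.9],
    [100.0, 100.0],  # Byzantine outlier
]
print(aggregate_median(grads))
```

Krum, by contrast, scores each client's gradient by its distance to its nearest neighbors and selects the gradient with the smallest score, rather than averaging coordinates.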

