May 14, 2024, 4:11 a.m. | Xiaolan Gu, Ming Li, Li Xiong

cs.CR updates on arXiv.org

arXiv:2306.12608v2 Announce Type: replace
Abstract: Federated Learning (FL) allows multiple participating clients to train machine learning models collaboratively while keeping their datasets local and exchanging only the gradient or model updates with a coordinating server. Existing FL protocols are vulnerable to attacks that aim to compromise data privacy and/or model robustness. Recently proposed defenses have focused on ensuring either privacy or robustness, but not both. In this paper, we focus on simultaneously achieving differential privacy (DP) and Byzantine robustness for cross-silo …
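
To make the setting concrete, below is a minimal sketch of one cross-silo FL round that combines two generic building blocks the abstract refers to: per-client clipping with Gaussian noise for differential privacy, and a coordinate-wise median for Byzantine-robust aggregation. This is not the protocol proposed in arXiv:2306.12608; the function names (local_update, privatize, robust_aggregate) and parameters (clip_norm, noise_multiplier) are hypothetical placeholders for illustration only.

```python
# Illustrative sketch only: generic DP noise + robust aggregation in one FL round.
# Not the method of arXiv:2306.12608; all names below are hypothetical.
import numpy as np

def local_update(model: np.ndarray, data: tuple, lr: float = 0.1) -> np.ndarray:
    """One local gradient step on a least-squares objective; returns the model delta."""
    X, y = data
    grad = X.T @ (X @ model - y) / len(y)   # gradient of 0.5 * ||Xw - y||^2 / n
    return -lr * grad

def privatize(update: np.ndarray, clip_norm: float, noise_multiplier: float,
              rng: np.random.Generator) -> np.ndarray:
    """Clip the update to L2 norm clip_norm, then add calibrated Gaussian noise."""
    scale = min(1.0, clip_norm / (np.linalg.norm(update) + 1e-12))
    return update * scale + rng.normal(0.0, noise_multiplier * clip_norm, update.shape)

def robust_aggregate(updates: list) -> np.ndarray:
    """Coordinate-wise median: tolerates a minority of Byzantine (arbitrary) updates."""
    return np.median(np.stack(updates), axis=0)

def fl_round(model: np.ndarray, client_datasets: list,
             clip_norm: float = 1.0, noise_multiplier: float = 0.5,
             seed: int = 0) -> np.ndarray:
    """Server-side view of one round: collect noisy client updates, aggregate robustly."""
    rng = np.random.default_rng(seed)
    noisy_updates = [
        privatize(local_update(model, d), clip_norm, noise_multiplier, rng)
        for d in client_datasets
    ]
    return model + robust_aggregate(noisy_updates)
```

The sketch highlights the tension the abstract points to: the DP noise added in privatize perturbs exactly the updates that the robust aggregator must compare, so privacy and robustness mechanisms interact rather than compose for free.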

