Feb. 29, 2024, 5:11 a.m. | Xiaojin Zhang, Lixin Fan, Siwei Wang, Wenjie Li, Kai Chen, Qiang Yang

cs.CR updates on arXiv.org

arXiv:2304.05836v3 Announce Type: replace-cross
Abstract: In federated learning, benign participants aim to optimize a global model collaboratively. However, the risk of \textit{privacy leakage} cannot be ignored in the presence of \textit{semi-honest} adversaries. Existing research has focused either on designing protection mechanisms or on inventing attack mechanisms. While the battle between defenders and attackers seems never-ending, we are concerned with one critical question: is it possible to prevent potential attacks in advance? To address this, we propose the first game-theoretic framework …
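To make the defender-vs-attacker tension concrete, here is a minimal sketch (not the paper's framework) of a federated averaging round in which each client perturbs its update with Gaussian noise before sharing it with a semi-honest aggregator. The names (`local_update`, `protect`, `sigma`) and the noise-based protection mechanism are illustrative assumptions; they only show the privacy-utility trade-off the abstract refers to.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(global_w, X, y, lr=0.1):
    """One step of local least-squares gradient descent on a client's shard."""
    grad = 2 * X.T @ (X @ global_w - y) / len(y)
    return global_w - lr * grad

def protect(update, sigma):
    """Toy protection mechanism: add Gaussian noise.
    Larger sigma -> less leakage to a semi-honest aggregator, but more utility loss."""
    return update + rng.normal(0.0, sigma, size=update.shape)

def fedavg_round(global_w, clients, sigma):
    """Aggregate the (possibly noised) client updates by simple averaging."""
    updates = [protect(local_update(global_w, X, y), sigma) for X, y in clients]
    return np.mean(updates, axis=0)

# Toy data: 3 clients, each holding a small linear-regression shard.
d = 5
true_w = rng.normal(size=d)
clients = []
for _ in range(3):
    X = rng.normal(size=(20, d))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=20)))

# Compare no protection against moderate protection: the model error grows
# with sigma, which is exactly the utility cost a defender must weigh.
for sigma in (0.0, 0.5):
    w = np.zeros(d)
    for _ in range(50):
        w = fedavg_round(w, clients, sigma)
    print(f"sigma={sigma:.1f}  model error={np.linalg.norm(w - true_w):.3f}")
```

In a game-theoretic reading of this toy setup, the defender chooses `sigma` (protection strength) while the attacker chooses how much effort to invest in reconstructing client data from the shared updates; the paper's contribution, per the abstract, is to analyze such interactions in advance rather than iterating attack and defense designs.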
