April 13, 2023, 1:10 a.m. | Xiaojin Zhang, Lixin Fan, Siwei Wang, Wenjie Li, Kai Chen, Qiang Yang

cs.CR updates on arXiv.org

In federated learning, benign participants aim to optimize a global model
collaboratively. However, the risk of privacy leakage cannot be ignored in the
presence of semi-honest adversaries. Existing research has focused either on
designing protection mechanisms or on inventing attack mechanisms. While the
battle between defenders and attackers seems never-ending, we are concerned
with one critical question: is it possible to prevent potential attacks in
advance? To address this, we propose the first game-theoretic framework that
considers both FL defenders …
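A minimal sketch of the general idea described in the abstract, not the paper's actual framework: a toy two-player game between an FL defender, who chooses how much Gaussian noise to add to shared updates (a protection mechanism), and a semi-honest attacker, who chooses how much effort to invest in an inference attack. The strategy sets, payoff shapes, and all constants below are hypothetical assumptions for illustration only.

```python
# Toy defender-vs-attacker game for federated learning privacy protection.
# All payoff functions and constants are illustrative assumptions, not taken
# from the paper referenced above.
import numpy as np

noise_levels = np.array([0.0, 0.1, 0.5, 1.0])   # defender's pure strategies (noise scale)
attack_efforts = np.array([0.0, 0.5, 1.0])      # attacker's pure strategies (attack effort)

def defender_payoff(sigma, effort):
    utility_loss = sigma ** 2                    # heavier noise degrades the global model
    leakage = effort / (1.0 + 5.0 * sigma)       # noise suppresses privacy leakage
    return -(utility_loss + leakage)

def attacker_payoff(sigma, effort):
    leakage = effort / (1.0 + 5.0 * sigma)       # attack gain shrinks as protection grows
    cost = 0.2 * effort                          # mounting the attack is not free
    return leakage - cost

# Bimatrix payoffs: rows index defender strategies, columns index attacker strategies.
D = np.array([[defender_payoff(s, e) for e in attack_efforts] for s in noise_levels])
A = np.array([[attacker_payoff(s, e) for e in attack_efforts] for s in noise_levels])

# Exhaustive search for pure-strategy Nash equilibria: neither player can gain
# by unilaterally deviating from the chosen strategy pair.
equilibria = []
for i in range(len(noise_levels)):
    for j in range(len(attack_efforts)):
        if D[i, j] >= D[:, j].max() and A[i, j] >= A[i, :].max():
            equilibria.append((noise_levels[i], attack_efforts[j]))

print("pure-strategy equilibria (noise scale, attack effort):", equilibria)
```

In this toy instance the equilibrium analysis lets the defender pick a protection level before any attack occurs, which is the spirit of the "prevent potential attacks in advance" question posed in the abstract; the actual framework and payoff definitions are in the paper itself.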
