Feb. 7, 2024, 5:10 a.m. | Shanshan Han, Wenxuan Wu, Baturalp Buyukates, Weizhao Jin, Qifan Zhang, Yuhang Yao, Salman Avestimehr

cs.CR updates on arXiv.org

Federated Learning (FL) systems are vulnerable to adversarial attacks, in which malicious clients submit poisoned models either to prevent the global model from converging or to plant backdoors that induce the global model to misclassify certain samples. Current defense methods fall short in real-world FL systems: they either rely on impractical prior knowledge or introduce accuracy loss even when no attack occurs. Moreover, these methods offer no protocol for verifying the defense's execution, leaving participants doubtful about the correct execution of …
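To make the poisoning scenario concrete, here is a minimal sketch of a generic distance-based anomaly filter on the server side. This is an illustrative example only, not the protocol proposed in the paper: the server computes each client update's L2 distance from the coordinate-wise median and drops updates whose distance is an extreme outlier (via a median-absolute-deviation score) before averaging. All function and variable names here are hypothetical.

```python
import numpy as np

def robust_aggregate(updates, z_thresh=2.0):
    """Average client updates after dropping distance outliers.

    Illustrative distance-based anomaly filter (not the paper's method):
    updates far from the coordinate-wise median, as measured by an
    MAD-normalized score, are excluded before averaging.
    """
    updates = np.asarray(updates, dtype=float)
    median = np.median(updates, axis=0)
    dists = np.linalg.norm(updates - median, axis=1)
    # MAD-based outlier score over the distances; epsilon avoids div-by-zero.
    med_dist = np.median(dists)
    mad = np.median(np.abs(dists - med_dist)) + 1e-12
    keep = np.abs(dists - med_dist) / mad <= z_thresh
    return updates[keep].mean(axis=0), keep

# Nine honest clients near the true update, one poisoned outlier.
honest = [np.array([1.0, 1.0]) + 0.01 * i for i in range(9)]
poisoned = [np.array([50.0, -50.0])]
agg, kept = robust_aggregate(honest + poisoned)
# The poisoned update is filtered out; the aggregate stays near the
# honest clients' mean.
```

Note that such fixed-threshold filters illustrate exactly the weakness the abstract points out: they can reject benign-but-unusual updates (costing accuracy when no attack happens), and nothing in this procedure lets participants verify that the server actually ran it.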

