Feb. 7, 2024, 5:10 a.m. | Shanshan Han, Wenxuan Wu, Baturalp Buyukates, Weizhao Jin, Qifan Zhang, Yuhang Yao, Salman Avestimehr

cs.CR updates on arXiv.org

Federated Learning (FL) systems are vulnerable to adversarial attacks, where malicious clients submit poisoned models to prevent the global model from converging or to plant backdoors that induce the global model to misclassify certain samples. Current defense methods fall short in real-world FL systems: they either rely on impractical prior knowledge or introduce accuracy loss even when no attack occurs. Moreover, these methods offer no protocol for verifying execution, leaving participants doubtful about the correct execution of …
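The setting described here — a server averaging client updates while screening out poisoned ones — can be illustrated with a short sketch. This is not the paper's method; it is a generic distance-based anomaly filter (coordinate-wise median reference, MAD-scored distances), and all function names, thresholds, and data shapes below are illustrative assumptions.

```python
import numpy as np

def robust_aggregate(client_updates, score_threshold=3.0):
    """Average client model updates after dropping outliers.

    client_updates: list of 1-D np.ndarray deltas, one per client.
    Returns (aggregated_update, indices_of_flagged_clients).
    """
    updates = np.stack(client_updates)            # (n_clients, n_params)
    reference = np.median(updates, axis=0)        # robust per-coordinate center
    dists = np.linalg.norm(updates - reference, axis=1)
    med = np.median(dists)
    mad = np.median(np.abs(dists - med)) + 1e-12  # median absolute deviation
    scores = (dists - med) / (1.4826 * mad)       # robust z-scores
    keep = scores <= score_threshold              # flag far-away updates
    if not keep.any():                            # never drop every client
        keep[:] = True
    return updates[keep].mean(axis=0), np.flatnonzero(~keep)

# Toy usage: five honest clients plus one scaled (poisoned) update.
rng = np.random.default_rng(0)
honest = [rng.normal(0, 0.01, size=10) for _ in range(5)]
poisoned = rng.normal(0, 10.0, size=10)           # e.g. a scaled backdoor delta
agg, flagged = robust_aggregate(honest + [poisoned])
print("flagged client indices:", flagged)         # expect [5]
```

A filter like this needs no prior knowledge of the number of attackers, which is the kind of practicality constraint the abstract raises; it does not, however, address the verification-of-execution gap the abstract also points to.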
