Feb. 7, 2024, 5:10 a.m. | Shanshan Han, Wenxuan Wu, Baturalp Buyukates, Weizhao Jin, Qifan Zhang, Yuhang Yao, Salman Avestimehr

cs.CR updates on arXiv.org

Federated Learning (FL) systems are vulnerable to adversarial attacks, in which malicious clients submit poisoned models to prevent the global model from converging, or plant backdoors that induce the global model to misclassify certain samples. Current defense methods fall short in real-world FL systems: they either rely on impractical prior knowledge or introduce accuracy loss even when no attack occurs. Moreover, these methods provide no protocol for verifying their execution, leaving participants doubtful about the correct …
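To make the threat concrete, here is a minimal sketch of one common (and known to be imperfect) defense the abstract alludes to: flagging client updates whose magnitude deviates sharply from the cohort before aggregation. This is an illustrative z-score filter, not the paper's proposed method; the function name and threshold are hypothetical.

```python
import numpy as np

def filter_anomalous_updates(client_updates, z_thresh=2.0):
    """Drop client updates whose L2 norm is a statistical outlier.

    client_updates: list of 1-D numpy arrays (flattened model deltas).
    Returns (kept_updates, flagged_indices).
    """
    norms = np.array([np.linalg.norm(u) for u in client_updates])
    mu, sigma = norms.mean(), norms.std()
    if sigma == 0:  # all norms identical; nothing to flag
        return list(client_updates), []
    z = np.abs(norms - mu) / sigma
    flagged = [i for i in range(len(client_updates)) if z[i] > z_thresh]
    kept = [u for i, u in enumerate(client_updates) if i not in flagged]
    return kept, flagged

# Nine benign updates plus one scaled-up (model-poisoning) update.
rng = np.random.default_rng(0)
updates = [rng.normal(0, 1, 100) for _ in range(9)]
updates.append(rng.normal(0, 1, 100) * 50)  # simulated malicious client
kept, flagged = filter_anomalous_updates(updates)
```

Note the weakness the abstract points at: with no attack present, an honest client with unusual but legitimate data can still be flagged, costing accuracy, and nothing in this procedure lets other participants verify it was run correctly.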
