March 1, 2024, 5:11 a.m. | Xiaoyang Wang, Dimitrios Dimitriadis, Sanmi Koyejo, Shruti Tople

cs.CR updates on arXiv.org

arXiv:2210.01834v3 Announce Type: replace-cross
Abstract: Federated learning enables training high-utility models across several clients without directly sharing their private data. As a downside, the federated setting makes the model vulnerable to various adversarial attacks in the presence of malicious clients. Despite theoretical and empirical success in defending against attacks that aim to degrade a model's utility, defending against backdoor attacks, which increase model accuracy on backdoor samples alone without hurting utility on other samples, remains challenging. To this end, …
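To make the threat model concrete, here is a minimal, hypothetical sketch (not the paper's method) of why plain federated averaging is vulnerable: a single malicious client can scale its update so that a backdoor direction dominates the aggregate, while honest clients' small updates are washed out.

```python
# Hypothetical sketch: a scaled backdoor update surviving FedAvg.
# All numbers are illustrative toy values, not from the paper.

def fed_avg(updates):
    """Coordinate-wise average of client model updates (FedAvg)."""
    n = len(updates)
    return [sum(u[i] for u in updates) / n for i in range(len(updates[0]))]

# Three honest clients send similar, small updates toward the true objective.
honest = [[0.10, 0.20], [0.12, 0.18], [0.11, 0.22]]

# A malicious client boosts its update ("model replacement" style scaling)
# so the backdoor direction dominates the average.
backdoor = [4.0, -4.0]

aggregated = fed_avg(honest + [backdoor])
print(aggregated)  # the backdoor direction dominates both coordinates
```

Utility-degradation defenses often filter updates whose magnitude is anomalous; a stealthy backdoor attacker instead keeps the poisoned update close to the honest distribution, which is what makes this setting hard to defend.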

