March 1, 2024, 5:11 a.m. | Xiaoyang Wang, Dimitrios Dimitriadis, Sanmi Koyejo, Shruti Tople

cs.CR updates on arXiv.org arxiv.org

arXiv:2210.01834v3 Announce Type: replace-cross
Abstract: Federated learning enables training high-utility models across several clients without directly sharing their private data. As a downside, the federated setting makes the model vulnerable to various adversarial attacks in the presence of malicious clients. Despite theoretical and empirical success in defending against attacks that aim to degrade a model's utility, defending against backdoor attacks, which increase model accuracy exclusively on backdoor samples without hurting utility on other samples, remains challenging. To this end, …
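To make the setting concrete, here is a minimal sketch of federated averaging (FedAvg) with one backdoor client. Everything here is illustrative, not the paper's method: the toy "model" is a plain weight vector, the function name `fed_avg` and the trigger coordinate are hypothetical, and the attacker simply shifts one coordinate while staying close to the benign update distribution so clean-data utility is preserved.

```python
import numpy as np

def fed_avg(client_updates, weights=None):
    """Average client model updates; optionally weight by local data size."""
    updates = np.stack(client_updates)
    if weights is None:
        return updates.mean(axis=0)
    w = np.asarray(weights, dtype=float)
    return (updates * w[:, None]).sum(axis=0) / w.sum()

# Honest clients send small updates; the backdoor attacker adds a crafted
# shift on one coordinate (a stand-in for embedding a trigger) while its
# update otherwise looks statistically benign.
rng = np.random.default_rng(0)
honest = [rng.normal(0.0, 0.01, size=4) for _ in range(9)]
backdoored = rng.normal(0.0, 0.01, size=4) + np.array([0.0, 0.0, 0.5, 0.0])

global_update = fed_avg(honest + [backdoored])
print(global_update)
```

Because the malicious contribution is diluted only by averaging (0.5 / 10 clients), the aggregated update is measurably pulled toward the trigger direction while the other coordinates stay near zero, which is why norm-based or outlier defenses struggle against stealthy backdoors.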

