Invariant Aggregator for Defending against Federated Backdoor Attacks
March 1, 2024, 5:11 a.m. | Xiaoyang Wang, Dimitrios Dimitriadis, Sanmi Koyejo, Shruti Tople
cs.CR updates on arXiv.org arxiv.org
Abstract: Federated learning enables training high-utility models across several clients without directly sharing their private data. As a downside, the federated setting makes the model vulnerable to various adversarial attacks in the presence of malicious clients. Despite the theoretical and empirical success in defending against attacks that aim to degrade models' utility, defense against backdoor attacks that increase model accuracy on backdoor samples exclusively without hurting the utility on other samples remains challenging. To this end, …
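To make the setting concrete: in federated learning, a server repeatedly averages model updates from clients, and a backdoored client can steer specific parameter coordinates while leaving overall utility intact. Below is a minimal, hedged sketch of one invariance-style defense idea — masking out coordinates where clients' update directions disagree (similar in spirit to sign-agreement/AND-masking schemes). This is an illustrative assumption, not the exact Invariant Aggregator proposed in the paper; the function name and threshold are hypothetical.

```python
import numpy as np

def sign_agreement_aggregate(updates, threshold=0.8):
    """Average client updates, zeroing coordinates where fewer than
    `threshold` of clients agree on the update's sign.

    Generic invariance-style defense sketch (sign-agreement masking),
    NOT the paper's exact Invariant Aggregator.
    """
    U = np.stack(updates)                        # (n_clients, dim)
    signs = np.sign(U)
    # Fraction of clients consistent with the majority direction,
    # per coordinate: |sum of signs| / n_clients.
    agree = np.abs(signs.sum(axis=0)) / len(updates)
    mask = (agree >= threshold).astype(U.dtype)
    return U.mean(axis=0) * mask

# Hypothetical example: three honest clients push coordinate 0 in the
# same direction; one malicious client alone pushes coordinate 1
# (a "backdoor" direction the honest clients do not support).
updates = [
    np.array([1.0, 0.0]),
    np.array([0.9, 0.1]),
    np.array([1.1, -0.1]),
    np.array([1.0, 5.0]),   # malicious: large push on coordinate 1
]
agg = sign_agreement_aggregate(updates)
# Coordinate 0 survives (all clients agree on its sign); coordinate 1
# is masked to zero because only one client pushes it consistently.
```

The intuition is that a backdoor direction is, by construction, favored only by the malicious minority, so requiring directional agreement across clients suppresses it while leaving the invariant (honest-majority) directions intact.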