July 19, 2023, 1:10 a.m. | Sungwon Park, Sungwon Han, Fangzhao Wu, Sundong Kim, Bin Zhu, Xing Xie, Meeyoung Cha

cs.CR updates on arXiv.org (arxiv.org)

Federated learning enables learning from decentralized data sources without
compromising privacy, which makes it a crucial technique. However, it is
vulnerable to model poisoning attacks, in which malicious clients interfere
with the training process. Previous defense mechanisms have focused on the
server side, using careful model aggregation, but this may not be effective
when the data is not identically distributed or when attackers can access the
information of benign clients. In this paper, we propose a new defense
mechanism that focuses …
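As background for the server-side defenses the abstract mentions, here is a minimal sketch (not the paper's proposed client-side method) contrasting plain FedAvg averaging with a coordinate-wise median, a classic robust aggregation rule. All names, update shapes, and values are illustrative assumptions; it only assumes NumPy.

```python
# Sketch of server-side aggregation in federated learning (illustrative only).
# FedAvg averages client updates, so one poisoned update can shift the global
# model; a coordinate-wise median bounds the influence of a malicious minority.
import numpy as np


def fedavg(client_updates: list[np.ndarray]) -> np.ndarray:
    """Plain averaging: vulnerable to a single large poisoned update."""
    return np.mean(np.stack(client_updates), axis=0)


def coordinate_median(client_updates: list[np.ndarray]) -> np.ndarray:
    """Robust aggregation: per-parameter median limits attacker influence."""
    return np.median(np.stack(client_updates), axis=0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Nine benign clients send small updates; one attacker sends a huge one.
    benign = [rng.normal(0.0, 0.1, size=4) for _ in range(9)]
    poisoned = [np.full(4, 100.0)]
    updates = benign + poisoned

    print("FedAvg :", fedavg(updates))             # pulled toward the attacker
    print("Median :", coordinate_median(updates))  # stays near the benign mean
```

As the abstract notes, such aggregation-based defenses can degrade when client data is not identically distributed, since benign updates then also diverge from one another, which is the gap the paper's client-side approach targets.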

