Defending against Data Poisoning Attacks in Federated Learning via User Elimination
April 22, 2024, 4:10 a.m. | Nick Galanis
cs.CR updates on arXiv.org arxiv.org
Abstract: In the evolving landscape of Federated Learning (FL), a new type of attack concerns the research community, namely Data Poisoning Attacks, which threaten model integrity by maliciously altering training data. This paper introduces a novel defensive framework focused on the strategic elimination of adversarial users within a federated model. We detect those anomalies in the aggregation phase of the Federated Algorithm, by integrating metadata gathered by the local training instances with Differential Privacy techniques, …
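The abstract only outlines the mechanism, so the following is a minimal sketch of what a user-elimination defense in a FedAvg aggregation step could look like, assuming (hypothetically) that each client reports a training-loss statistic as metadata, that the metadata is protected with Laplace noise as the differential-privacy mechanism, and that outlier clients are dropped before averaging. All names and thresholds (dp_noise_scale, z_threshold, etc.) are illustrative and not taken from the paper.

```python
import numpy as np

def aggregate_with_user_elimination(updates, losses, dp_noise_scale=0.1, z_threshold=2.5):
    """Average client model updates, excluding clients whose (noised) loss metadata
    looks anomalous.

    updates: list of np.ndarray, one flattened parameter update per client
    losses:  list of float, one training-loss statistic per client (the metadata)
    """
    losses = np.asarray(losses, dtype=float)
    # Laplace noise stands in for the differential-privacy mechanism on the metadata.
    noised = losses + np.random.laplace(scale=dp_noise_scale, size=losses.shape)

    # Flag clients whose noised loss deviates strongly from the cohort median
    # (robust z-score via median absolute deviation).
    median = np.median(noised)
    mad = np.median(np.abs(noised - median)) + 1e-12
    z_scores = np.abs(noised - median) / mad
    keep = z_scores < z_threshold

    # Fall back to all clients rather than aggregating nothing.
    if not np.any(keep):
        keep = np.ones_like(keep, dtype=bool)

    kept_updates = np.stack([u for u, k in zip(updates, keep) if k])
    return kept_updates.mean(axis=0), keep
```

In this sketch the server never inspects raw client data, only the noised loss statistic, which is one plausible reading of combining local metadata with Differential Privacy before eliminating suspected poisoners from the round.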