March 12, 2024, 4:11 a.m. | Hamid Mozaffari, Sunav Choudhary, Amir Houmansadr

cs.CR updates on arXiv.org

arXiv:2403.06319v1 Announce Type: cross
Abstract: Federated learning (FL) is a distributed machine learning paradigm that enables training models on decentralized data. The field of FL security against poisoning attacks is plagued with confusion due to the proliferation of research that makes different assumptions about the capabilities of adversaries and the adversary models they operate under. Our work aims to clarify this confusion by presenting a comprehensive analysis of the various poisoning attacks and defensive aggregation rules (AGRs) proposed in the …
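
To make the contrast between naive and defensive aggregation rules (AGRs) concrete, here is a minimal sketch, not taken from the paper: it compares plain FedAvg averaging with a coordinate-wise median AGR when a single malicious client submits a large, poisoned update. The client counts, update dimensions, and the choice of median as the robust rule are illustrative assumptions, not the paper's specific methods.

# Minimal sketch (illustrative, not from the paper): plain FedAvg vs. a
# coordinate-wise median AGR facing one poisoning client.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 9 honest clients and 1 malicious client,
# each submitting a 5-dimensional model update.
honest_updates = rng.normal(loc=0.1, scale=0.01, size=(9, 5))
poisoned_update = -10.0 * np.ones((1, 5))   # large-magnitude adversarial update
all_updates = np.vstack([honest_updates, poisoned_update])

# Plain FedAvg: a simple mean, easily dragged off course by one outlier.
fedavg = all_updates.mean(axis=0)

# A robust AGR: the coordinate-wise median, which discards extreme
# coordinates as long as malicious clients remain a minority.
median_agr = np.median(all_updates, axis=0)

print("FedAvg aggregate:     ", np.round(fedavg, 3))
print("Median-AGR aggregate: ", np.round(median_agr, 3))

Running this shows the mean pulled far from the honest clients' updates while the median stays close to them, which is the basic intuition behind the defensive AGRs the paper surveys.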

