Feb. 16, 2024, 5:10 a.m. | Enrique Mármol Campos, Aurora González Vidal, José Luis Hernández Ramos, Antonio Skarmeta

cs.CR updates on arXiv.org

arXiv:2402.10082v1 Announce Type: cross
Abstract: Federated Learning (FL) represents a promising approach to the privacy concerns typically associated with centralized Machine Learning (ML) deployments. Despite its well-known advantages, FL is vulnerable to security attacks such as Byzantine behaviors and poisoning attacks, which can significantly degrade model performance and hinder convergence. The effectiveness of existing mitigation approaches, such as the median, trimmed mean, or Krum aggregation functions, has been only partially demonstrated and only against specific attacks. Our study …
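For context on the robust aggregation functions the abstract mentions, the following is a minimal sketch of coordinate-wise median and trimmed-mean aggregation of client updates. It is an illustration of those standard baselines, not the method proposed in the paper; the function names, the trim_ratio parameter, and the toy Byzantine scenario are assumptions made for the example.

```python
import numpy as np

def coordinate_wise_median(updates):
    """Aggregate client updates by taking the per-coordinate median."""
    return np.median(np.stack(updates), axis=0)

def trimmed_mean(updates, trim_ratio=0.1):
    """Average each coordinate after discarding the largest and smallest
    trim_ratio fraction of client values (illustrative parameter)."""
    stacked = np.stack(updates)              # shape: (n_clients, n_params)
    n = stacked.shape[0]
    k = int(n * trim_ratio)                  # number of clients trimmed at each end
    sorted_vals = np.sort(stacked, axis=0)   # sort per coordinate
    kept = sorted_vals[k:n - k] if k > 0 else sorted_vals
    return kept.mean(axis=0)

# Toy example: 5 honest clients plus 2 Byzantine clients sending large outliers.
rng = np.random.default_rng(0)
honest = [rng.normal(0.0, 0.1, size=10) for _ in range(5)]
byzantine = [np.full(10, 100.0) for _ in range(2)]
updates = honest + byzantine

print(coordinate_wise_median(updates))   # stays near 0 despite the outliers
print(trimmed_mean(updates, 0.3))        # robust once enough values are trimmed
```

Both baselines ignore extreme per-coordinate values, which is why a plain mean would be pulled toward the Byzantine updates while these aggregators are not; their limits against more sophisticated attacks are exactly what the paper examines.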
