May 13, 2024, 4:11 a.m. | Yujie Zhang, Neil Gong, Michael K. Reiter

cs.CR updates on arXiv.org arxiv.org

arXiv:2405.06206v1 Announce Type: new
Abstract: Federated Learning (FL) is a decentralized machine learning method that enables participants to collaboratively train a model without sharing their private data. Despite its privacy and scalability benefits, FL is susceptible to backdoor attacks, where adversaries poison the local training data of a subset of clients using a backdoor trigger, aiming to make the aggregated model produce malicious results when the same backdoor condition is met by an inference-time input. Existing backdoor attacks in FL …
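The poisoning step the abstract describes, a subset of clients stamping a backdoor trigger onto their local training data and relabeling it to an attacker-chosen class, can be illustrated with a minimal sketch. This is a generic trigger-based poisoning illustration, not the specific attack proposed in the paper; the function name, trigger shape, and parameters are all assumptions for the example.

```python
import numpy as np

def poison_client_data(images, labels, target_label, poison_frac=0.2, seed=0):
    """Illustrative backdoor poisoning of one FL client's local dataset:
    stamp a small bright-square trigger onto a fraction of the images and
    relabel those samples to the attacker's target class.

    images: (N, H, W) float array in [0, 1]; labels: (N,) int array.
    Returns poisoned copies plus the indices that were poisoned.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_frac)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # The "trigger": a 3x3 maximal-intensity patch in the bottom-right corner.
    # At inference time, any input carrying this patch should activate the backdoor.
    images[idx, -3:, -3:] = 1.0
    labels[idx] = target_label
    return images, labels, idx

# One client's toy local dataset of 10 tiny 8x8 "images".
imgs = np.zeros((10, 8, 8))
lbls = np.arange(10) % 2  # clean labels: 0 or 1
p_imgs, p_lbls, idx = poison_client_data(imgs, lbls, target_label=7)
```

The client would then train its local model update on the poisoned data; the attack succeeds if the trigger-to-target-label association survives server-side aggregation.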

