May 13, 2024, 4:11 a.m. | Yujie Zhang, Neil Gong, Michael K. Reiter

cs.CR updates on arXiv.org arxiv.org

arXiv:2405.06206v1 Announce Type: new
Abstract: Federated Learning (FL) is a decentralized machine learning method that enables participants to collaboratively train a model without sharing their private data. Despite its privacy and scalability benefits, FL is susceptible to backdoor attacks, where adversaries poison the local training data of a subset of clients using a backdoor trigger, aiming to make the aggregated model produce malicious results when the same backdoor condition is met by an inference-time input. Existing backdoor attacks in FL …
