Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning
May 13, 2024, 4:11 a.m. | Yujie Zhang, Neil Gong, Michael K. Reiter
cs.CR updates on arXiv.org
Abstract: Federated Learning (FL) is a decentralized machine learning method that enables participants to collaboratively train a model without sharing their private data. Despite its privacy and scalability benefits, FL is susceptible to backdoor attacks, where adversaries poison the local training data of a subset of clients using a backdoor trigger, aiming to make the aggregated model produce malicious results when the same backdoor condition is met by an inference-time input. Existing backdoor attacks in FL …
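For readers unfamiliar with the attack pattern the abstract describes, the sketch below illustrates a generic trigger-based data-poisoning backdoor in FL, not the paper's trigger-optimized method: a malicious client stamps a fixed patch trigger onto a fraction of its local training examples, relabels them to an attacker-chosen target class, and the server's FedAvg aggregation blends the resulting update with the benign ones. The function names, the fixed 3x3 patch trigger, the poison fraction, and the toy data are all illustrative assumptions.

```python
# Minimal sketch of a trigger-based data-poisoning backdoor in federated
# learning. This is NOT the paper's trigger-optimized attack; the trigger
# here is a fixed patch, and all names/values are illustrative assumptions.
import numpy as np

TARGET_LABEL = 0      # attacker's desired output class (assumed)
TRIGGER_VALUE = 1.0   # pixel value used for the trigger patch (assumed)

def apply_trigger(image: np.ndarray) -> np.ndarray:
    """Stamp a small fixed patch into the bottom-right corner of an image."""
    poisoned = image.copy()
    poisoned[-3:, -3:] = TRIGGER_VALUE
    return poisoned

def poison_client_data(images, labels, poison_fraction=0.2, rng=None):
    """Poison a fraction of one malicious client's local training set:
    add the trigger to the input and flip the label to TARGET_LABEL."""
    rng = rng or np.random.default_rng(0)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_fraction * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = apply_trigger(images[i])
        labels[i] = TARGET_LABEL
    return images, labels

def fedavg(client_updates, client_sizes):
    """Server-side FedAvg: size-weighted average of client model updates.
    A malicious client's poisoned update is blended in with benign ones."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_updates, client_sizes))

if __name__ == "__main__":
    # Toy demo: one malicious client poisons 20% of its local 8x8 "images".
    rng = np.random.default_rng(42)
    imgs = rng.random((100, 8, 8))
    lbls = rng.integers(0, 10, size=100)
    p_imgs, p_lbls = poison_client_data(imgs, lbls, rng=rng)
    print("examples relabeled to target:", int((p_lbls == TARGET_LABEL).sum()))

    # Server aggregates toy updates from 5 clients, one of them malicious;
    # the backdoor contribution rides along in the weighted average.
    updates = [np.full((4,), float(i)) for i in range(5)]
    sizes = [100, 100, 100, 100, 100]
    print("aggregated update:", fedavg(updates, sizes))
```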