PoisonedFL: Model Poisoning Attacks to Federated Learning via Multi-Round Consistency
April 25, 2024, 7:11 p.m. | Yueqi Xie, Minghong Fang, Neil Zhenqiang Gong
cs.CR updates on arXiv.org (arxiv.org)
Abstract: Model poisoning attacks are critical security threats to Federated Learning (FL). Existing model poisoning attacks suffer from two key limitations: 1) they achieve suboptimal effectiveness when defenses are deployed, and/or 2) they require knowledge of the model updates or local training data on genuine clients. In this work, we make a key observation that their suboptimal effectiveness arises from only leveraging model-update consistency among malicious clients within individual training rounds, making the attack effect self-cancel …
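The abstract's key observation — that per-round random attack directions self-cancel while a multi-round consistent direction accumulates — can be illustrated with a minimal sketch. This is not the paper's PoisonedFL algorithm; the dimensions, round count, and scaling below are illustrative assumptions, comparing the accumulated malicious contribution under a fixed direction versus a fresh random direction each round.

```python
import random

DIM = 4          # assumed model-update dimension (illustrative)
ROUNDS = 50      # assumed number of FL training rounds
N_MALICIOUS = 3  # assumed number of malicious clients

def drift(fixed_direction: bool) -> float:
    """Norm of the summed malicious updates across all rounds.

    With a fixed (multi-round consistent) direction, per-round pushes add up;
    with a fresh random sign pattern each round, they largely cancel.
    """
    rng = random.Random(0)
    persistent = [1.0] * DIM  # assumed fixed attack direction
    total = [0.0] * DIM
    for _ in range(ROUNDS):
        if fixed_direction:
            d = persistent
        else:
            # inconsistent attack: new random direction every round
            d = [rng.choice([-1.0, 1.0]) for _ in range(DIM)]
        # each malicious client submits the same unit-scale update this round
        for i in range(DIM):
            total[i] += N_MALICIOUS * d[i]
    return sum(x * x for x in total) ** 0.5

# A consistent direction drifts linearly with rounds; random directions
# grow only on the order of sqrt(rounds) and mostly cancel.
assert drift(True) > drift(False)
```

Under these toy parameters the consistent attack accumulates a drift of norm `N_MALICIOUS * ROUNDS * sqrt(DIM)`, while the random-direction variant stays far smaller — mirroring the self-cancellation the abstract describes.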