Pelta: Shielding Transformers to Mitigate Evasion Attacks in Federated Learning. (arXiv:2308.04373v1 [cs.LG])
cs.CR updates on arXiv.org
The main premise of federated learning is that machine learning model updates
are computed locally, in particular to preserve user data privacy, as the data
never leave the perimeter of the user's device. This mechanism assumes that the
general model, once aggregated, is broadcast to collaborating, non-malicious
nodes. However, without proper defenses, compromised clients can easily probe
the model inside their local memory in search of adversarial examples. For
instance, in image-based applications, adversarial examples consist of
imperceptibly perturbed images …
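The probing step described above can be sketched minimally. This is not the paper's Pelta defense; it is a generic FGSM-style illustration, assuming a hypothetical linear classifier standing in for the model a compromised client holds in local memory:

```python
import numpy as np

def predict(w, b, x):
    """Logistic-regression confidence for a flattened 'image' x (hypothetical stand-in model)."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm_perturb(w, b, x, y, eps=0.05):
    """One FGSM step: shift each pixel by eps in the direction that increases the loss."""
    p = predict(w, b, x)
    grad = (p - y) * w                      # d(cross-entropy)/dx for this linear model
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

rng = np.random.default_rng(0)
w = rng.normal(size=16)                     # weights "probed" from the local model copy
b = 0.0
x = rng.uniform(size=16)                    # a clean input with pixels in [0, 1]
y = 1.0                                     # its true label

x_adv = fgsm_perturb(w, b, x, y)
print(np.max(np.abs(x_adv - x)))            # per-pixel change bounded by eps: imperceptible
print(predict(w, b, x, ), predict(w, b, x_adv))  # model confidence drops on x_adv
```

The key point the abstract makes is that nothing more than read access to the local model copy is needed: the gradient used here is computed entirely on the client, which is why shielding the model inside the client is the defense target.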