Decepticons: Corrupted Transformers Breach Privacy in Federated Learning for Language Models. (arXiv:2201.12675v2 [cs.LG] UPDATED)
cs.CR updates on arXiv.org
Privacy is a central tenet of federated learning (FL), which trains models without
centralizing user data. However, previous work has shown that the gradient
updates used in FL can leak user information. While most industrial uses of FL
are text applications (e.g. keystroke prediction), nearly all attacks on FL
privacy have focused on simple image classifiers. We propose a novel attack
that reveals private user text by deploying malicious parameter vectors, and
which succeeds even with mini-batches, multiple …
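The abstract does not give the attack's details, but the gradient-leakage principle it builds on can be illustrated with a minimal, hypothetical sketch: in a language model, the gradient of the token-embedding matrix is nonzero only in the rows of tokens that appeared in the user's batch, so a server observing a raw FL update can read off which tokens the user typed. All names and the toy loss below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, dim = 50, 8
E = rng.normal(size=(vocab_size, dim))   # token-embedding matrix (a model parameter)

private_tokens = [3, 17, 42]             # the user's private input (hypothetical)
x = E[private_tokens]                    # embedding lookup in the forward pass

# Toy forward pass: loss is a linear readout of each position's embedding.
w = rng.normal(size=dim)
loss = (x @ w).sum()

# Backward pass w.r.t. E: each looked-up row receives gradient w;
# rows of tokens that never appeared stay exactly zero.
grad_E = np.zeros_like(E)
for t in private_tokens:
    grad_E[t] += w

# The "server" recovers the set of used tokens from the nonzero rows
# of the submitted gradient update.
leaked = sorted(np.nonzero(np.abs(grad_E).sum(axis=1) > 0)[0].tolist())
print(leaked)  # [3, 17, 42]
```

This only recovers the *set* of tokens from a single-example update; the paper's contribution, per the abstract, is an active attack via malicious parameters that works even under mini-batching, where such naive row-reading breaks down.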