Defense Mechanisms Against Training-Hijacking Attacks in Split Learning. (arXiv:2302.08618v1 [cs.LG])
cs.CR updates on arXiv.org
Distributed deep learning frameworks enable more efficient and privacy-aware
training of deep neural networks across multiple clients. Split learning
achieves this by splitting a neural network between a client and a server such
that the client computes the initial set of layers, and the server computes the
rest. However, this method introduces a unique attack vector for a malicious
server attempting to recover the client's private inputs: the server can direct
the client model towards learning any task of its …
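The split described above (client computes the initial layers, server computes the rest, and only intermediate activations cross the boundary) can be sketched minimally. All names, shapes, and weights here are hypothetical illustrations, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Client side: the initial layers of a toy network (hypothetical sizes).
W_client = rng.normal(size=(4, 8))

def client_forward(x):
    # The client computes the first layers and sends only the
    # intermediate activations to the server, never the raw input.
    return np.maximum(x @ W_client, 0.0)  # ReLU

# Server side: the remaining layers.
W_server = rng.normal(size=(8, 2))

def server_forward(h):
    # The server completes the forward pass from the activations alone.
    return h @ W_server

x_private = rng.normal(size=(1, 4))   # private input stays on the client
activations = client_forward(x_private)
logits = server_forward(activations)
print(logits.shape)  # (1, 2)
```

The attack surface the abstract mentions arises because the server also sends gradients back to `client_forward` during training, letting a malicious server steer what the client-side layers learn.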