Passive Inference Attacks on Split Learning via Adversarial Regularization. (arXiv:2310.10483v2 [cs.CR] UPDATED)
cs.CR updates on arXiv.org
Split Learning (SL) has emerged as a practical and efficient alternative to
traditional federated learning. While previous attempts to attack SL have often
relied on overly strong assumptions or targeted easily exploitable models, we
seek to develop more practical attacks. We introduce SDAR, a novel attack
framework against SL with an honest-but-curious server. SDAR leverages
auxiliary data and adversarial regularization to learn a decodable simulator of
the client's private model, which can effectively infer the client's private
features under the …
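To make the threat model concrete, here is a minimal sketch of the split-learning setup the abstract assumes: the client keeps the first layers of the model and sends only intermediate ("smashed") activations to the server, which is exactly the signal an honest-but-curious server would try to invert. The layer shapes and names here are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-part split: the client privately owns the first layer,
# the server holds the remaining layer (shapes are illustrative).
W_client = rng.normal(size=(8, 4))   # client's private model weights
W_server = rng.normal(size=(4, 2))   # server-side model weights

def client_forward(x):
    # The client transmits only these "smashed" activations to the server.
    return np.maximum(x @ W_client, 0.0)  # ReLU cut layer

def server_forward(h):
    # The server completes the forward pass from the activations alone.
    return h @ W_server

x_private = rng.normal(size=(1, 8))   # client's private input features
h = client_forward(x_private)         # all the server ever observes
y = server_forward(h)

# An honest-but-curious server never sees x_private directly; an attack in
# the spirit of SDAR would train a simulator of the client's model (plus a
# decoder, guided by auxiliary data and adversarial regularization) to map
# h back toward x_private.
print(h.shape, y.shape)
```

The key observation is that privacy in split learning rests entirely on how hard it is to invert `client_forward` from its outputs; the abstract's attack targets exactly that inversion.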