Secure Split Learning against Property Inference, Data Reconstruction, and Feature Space Hijacking Attacks. (arXiv:2304.09515v1 [cs.LG])
cs.CR updates on arXiv.org
Split learning of deep neural networks (SplitNN) offers a promising
solution for joint learning between a guest and a host, who may come from
different backgrounds and hold vertically partitioned features. However,
SplitNN creates a new attack surface for an adversarial participant,
holding back its practical use in the real world. By investigating the
adversarial effects of highly threatening attacks, including property
inference, data reconstruction, and feature space hijacking attacks, we
identify the underlying vulnerability of SplitNN …
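For readers unfamiliar with the setup the abstract describes, the following is a minimal sketch of vertical split learning: two parties hold different feature subsets for the same samples, and the guest shares only intermediate activations ("smashed data"), never raw features. The function names and the toy linear layers are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of vertical split learning (SplitNN).
# Guest and host each hold a different subset of features for the
# same sample; only the guest's intermediate output crosses the wire.

def guest_forward(guest_features, guest_weights):
    """Guest's bottom model: a toy linear layer producing smashed data."""
    return [sum(w * x for w, x in zip(guest_weights, guest_features))]

def host_forward(smashed, host_features, host_weights):
    """Host's top model: combines smashed data with the host's own features."""
    joined = smashed + host_features  # vertical feature concatenation
    return sum(w * x for w, x in zip(host_weights, joined))

# One sample, features partitioned vertically between the two parties.
guest_x = [1.0, 2.0]   # features held by the guest
host_x = [3.0]         # features held by the host

smashed = guest_forward(guest_x, [0.5, 0.5])  # guest sends this, not guest_x
pred = host_forward(smashed, host_x, [1.0, 2.0])
print(pred)  # → 7.5
```

The smashed data is exactly the attack surface the abstract refers to: an adversarial host can probe it for property inference or data reconstruction without ever seeing the guest's raw inputs.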