April 20, 2023, 1:10 a.m. | Yunlong Mao, Zexi Xin, Zhenyu Li, Jue Hong, Qingyou Yang, Sheng Zhong

cs.CR updates on arXiv.org arxiv.org

Split learning of deep neural networks (SplitNN) offers a promising
solution for joint learning in the mutual interest of a guest and a host,
who may come from different backgrounds and hold vertically partitioned
features. However, SplitNN creates a new attack surface for an adversarial
participant, holding back its practical use in the real world. By investigating
the adversarial effects of highly threatening attacks, including property
inference, data reconstruction, and feature hijacking attacks, we identify the
underlying vulnerability of SplitNN …
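
To make the setting concrete, below is a minimal sketch of vertical split learning in PyTorch, assuming a two-party setup where guest and host each hold a disjoint subset of feature columns for the same samples. The names `BottomModel`, `TopModel`, and all dimensions are illustrative, not taken from the paper; the key point is that only the intermediate embeddings and their gradients cross the party boundary, which is the attack surface the abstract describes.

```python
import torch
import torch.nn as nn

# Hypothetical two-party vertical SplitNN: each party holds a disjoint
# subset of the feature columns for the same set of samples.
GUEST_FEATURES, HOST_FEATURES, HIDDEN, CLASSES = 8, 12, 16, 2

class BottomModel(nn.Module):
    """Party-local network mapping raw features to a shared embedding."""
    def __init__(self, in_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, HIDDEN), nn.ReLU())
    def forward(self, x):
        return self.net(x)

class TopModel(nn.Module):
    """Network trained on the concatenated party embeddings."""
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(2 * HIDDEN, CLASSES)
    def forward(self, z_guest, z_host):
        return self.net(torch.cat([z_guest, z_host], dim=1))

guest_bottom = BottomModel(GUEST_FEATURES)
host_bottom = BottomModel(HOST_FEATURES)
top = TopModel()

# One joint training step on a toy batch; in a real deployment only the
# embeddings ("smashed data") and their gradients are exchanged between
# the parties, not the raw features.
x_guest = torch.randn(32, GUEST_FEATURES)   # guest's feature columns
x_host = torch.randn(32, HOST_FEATURES)     # host's feature columns
labels = torch.randint(0, CLASSES, (32,))   # held by the label owner

params = (list(guest_bottom.parameters())
          + list(host_bottom.parameters())
          + list(top.parameters()))
opt = torch.optim.SGD(params, lr=0.1)

logits = top(guest_bottom(x_guest), host_bottom(x_host))
loss = nn.functional.cross_entropy(logits, labels)
opt.zero_grad()
loss.backward()   # embedding gradients flow back to each party's bottom model
opt.step()
```

An adversarial participant in this protocol observes the other party's embeddings at every step, which is what enables the property inference, data reconstruction, and feature hijacking attacks the abstract investigates.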
