Oct. 19, 2022, 2:20 a.m. | Fei Zheng, Chaochao Chen, Binhui Yao, Xiaolin Zheng

cs.CR updates on arXiv.org

As a practical privacy-preserving learning method, split learning has drawn
much attention in academia and industry. However, its security is constantly
being questioned, since intermediate results are shared during training and
inference. In this paper, we focus on the privacy leakage caused by the
trained split model: an attacker can use a few labeled samples to fine-tune
the bottom model and achieve quite good performance. To prevent this kind of
privacy leakage, we propose the potential energy …
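The fine-tuning attack the abstract describes can be illustrated with a minimal sketch: the attacker takes the (frozen) bottom model's feature outputs and trains a small classification head on just a handful of labeled samples. Everything below is a hypothetical toy setup, not the paper's experiment: the "trained" bottom model is simulated as a fixed random linear layer with ReLU, and the head is plain logistic regression.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "trained" bottom model: a fixed linear map + ReLU.
# In a real attack this would be the bottom model obtained from split learning.
W_bottom = rng.normal(size=(10, 16))

def bottom_model(x):
    return np.maximum(x @ W_bottom, 0.0)

# Toy two-class data whose features carry label signal.
def make_data(n):
    y = rng.integers(0, 2, size=n)
    x = rng.normal(size=(n, 10)) + 2.0 * y[:, None]
    return x, y

# The attacker only holds a *few* labeled samples ...
x_few, y_few = make_data(20)
z_few = bottom_model(x_few)

# ... and fine-tunes a small head on top of the frozen bottom model
# (logistic regression trained by gradient descent).
w = np.zeros(16)
b = 0.0
for _ in range(500):
    logits = np.clip(z_few @ w + b, -30.0, 30.0)  # avoid exp overflow
    p = 1.0 / (1.0 + np.exp(-logits))
    g = p - y_few                                 # logistic-loss gradient
    w -= 0.1 * z_few.T @ g / len(y_few)
    b -= 0.1 * g.mean()

# Evaluate on fresh data: accuracy well above chance is the leakage.
x_test, y_test = make_data(1000)
logits_t = np.clip(bottom_model(x_test) @ w + b, -30.0, 30.0)
acc = ((1.0 / (1.0 + np.exp(-logits_t)) > 0.5) == y_test).mean()
print(acc)
```

Because the bottom model's outputs remain highly informative about the labels, even 20 labeled samples suffice here for well-above-chance test accuracy; this is the failure mode the paper's proposed loss aims to close.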
