Web: http://arxiv.org/abs/2201.04018

Jan. 12, 2022, 2:20 a.m. | Grzegorz Gawron, Philip Stubbings

cs.CR updates on arXiv.org arxiv.org

Split learning and differential privacy are technologies with growing
potential to help with privacy-compliant advanced analytics on distributed
datasets. Attacks against split learning are an important evaluation tool and
have recently received increased research attention. This work's
contribution is to apply a recent feature space hijacking attack (FSHA) to the
learning process of a split neural network enhanced with differential privacy
(DP), using a client-side off-the-shelf DP optimizer. The FSHA attack
reconstructs the client's private data with low error rates …
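For context, a client-side DP optimizer like the one mentioned above typically implements DP-SGD: clip each example's gradient to a norm bound C, average, then add Gaussian noise calibrated to C. Below is a minimal NumPy sketch of that mechanism; the function name, parameters, and values are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1,
                lr=0.1, rng=None):
    """One DP-SGD update (illustrative sketch, not the paper's code).

    Clips each per-example gradient to L2 norm `clip_norm`, averages,
    and adds Gaussian noise with std noise_multiplier * clip_norm / batch.
    """
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down only if the gradient exceeds the clipping bound.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(
        0.0, noise_multiplier * clip_norm / len(per_example_grads),
        size=avg.shape)
    # Return the parameter update (negative gradient step).
    return -lr * (avg + noise)
```

Because the clipping and noising happen entirely on the client, such an optimizer can be dropped in without changing the split-learning protocol itself, which is what makes it "off-the-shelf" in this setting.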

Tags: attacks, hijacking, learning, space
