Training Differentially Private Graph Neural Networks with Random Walk Sampling. (arXiv:2301.00738v1 [cs.LG])
cs.CR updates on arXiv.org
Deep learning models are known to put the privacy of their training data at
risk, which poses challenges for their safe and ethical release to the public.
Differentially private stochastic gradient descent is the de facto standard for
training neural networks without leaking sensitive information about the
training data. However, applying it to models for graph-structured data poses a
novel challenge: unlike with i.i.d. data, sensitive information about a node in
a graph can leak not only through its own gradients, but …
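The abstract names differentially private SGD (DP-SGD) as the standard mechanism. Its core step is per-example gradient clipping followed by calibrated Gaussian noise. Below is a minimal NumPy sketch of that one step, not the paper's graph-specific method; the function name `dp_sgd_step` and all parameter values are illustrative assumptions.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, lr, params, rng):
    """One illustrative DP-SGD update (sketch, not the paper's algorithm):
    clip each example's gradient to L2 norm <= clip_norm, average them,
    then add Gaussian noise with std noise_multiplier * clip_norm / batch_size."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose norm exceeds the clipping bound.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(
        0.0, noise_multiplier * clip_norm / len(per_example_grads), size=avg.shape
    )
    # Noisy gradient descent update.
    return params - lr * (avg + noise)

# Toy usage with made-up gradients.
rng = np.random.default_rng(0)
params = np.zeros(3)
grads = [np.array([10.0, 0.0, 0.0]), np.array([0.0, 2.0, 0.0])]
new_params = dp_sgd_step(grads, clip_norm=1.0, noise_multiplier=1.0,
                         lr=0.1, params=params, rng=rng)
```

The graph setting complicates this recipe because, as the abstract notes, one node's sensitive information also influences the gradients of neighboring nodes, so per-example clipping alone does not bound each node's total contribution.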