Inductive Graph Unlearning. (arXiv:2304.03093v1 [cs.LG])
cs.CR updates on arXiv.org
As a way to implement the "right to be forgotten" in machine learning,
\textit{machine unlearning} aims to completely remove the contributions and
information of deleted samples from a trained model without affecting the
contributions of the remaining samples. Recently, many frameworks for machine
unlearning have been proposed, and most of them focus on image and text data.
To extend machine unlearning to graph data, \textit{GraphEraser} has been
proposed. However, a critical issue is that \textit{GraphEraser} is
specifically designed …
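To make the goal of unlearning concrete, here is a minimal toy sketch (a hypothetical example, not GraphEraser or any method from the paper): a mean predictor that keeps sufficient statistics (sum and count), so deleting a sample's contribution yields exactly the same model as retraining from scratch without that sample.

```python
# Toy illustration of exact machine unlearning on a trivial model.
# Hypothetical example: a mean predictor whose sufficient statistics
# (sum, count) allow a sample's contribution to be removed completely.

class MeanModel:
    def __init__(self):
        self.total = 0.0
        self.count = 0

    def fit_one(self, x):
        self.total += x
        self.count += 1

    def unlearn_one(self, x):
        # Remove the deleted sample's contribution entirely.
        self.total -= x
        self.count -= 1

    def predict(self):
        return self.total / self.count if self.count else 0.0

data = [1.0, 2.0, 3.0, 10.0]
m = MeanModel()
for x in data:
    m.fit_one(x)
m.unlearn_one(10.0)           # exercise the "right to be forgotten"

retrained = MeanModel()
for x in [1.0, 2.0, 3.0]:     # retrain without the deleted sample
    retrained.fit_one(x)

# Unlearning matches full retraining exactly on this toy model.
assert m.predict() == retrained.predict()
```

Real models rarely admit such closed-form deletion, which is why frameworks like the one discussed above partition training so that only affected shards need retraining.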