DeepClean: Machine Unlearning on the Cheap by Resetting Privacy Sensitive Weights using the Fisher Diagonal. (arXiv:2311.10448v1 [cs.LG])
cs.CR updates on arXiv.org
Machine learning models trained on sensitive or private data can
inadvertently memorize and leak that information. Machine unlearning seeks to
retroactively remove such details from model weights to protect privacy. We
contribute a lightweight unlearning algorithm that leverages the Fisher
Information Matrix (FIM) for selective forgetting. Prior work in this area
requires full retraining or large matrix inversions, which are computationally
expensive. Our key insight is that the diagonal elements of the FIM, which
measure the sensitivity of log-likelihood to …
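The abstract's key idea — score each parameter by the diagonal of the Fisher Information Matrix and reset the most privacy-sensitive weights — can be illustrated with a minimal sketch. This is a hypothetical NumPy toy on logistic regression, not the paper's DeepClean implementation: it uses the empirical Fisher diagonal (mean squared per-example log-likelihood gradient) computed on a "forget" set, then re-initializes the top-k most sensitive weights.

```python
import numpy as np

# Hypothetical illustration (not the authors' code): Fisher-diagonal
# unlearning on a toy logistic-regression model w in R^d.
rng = np.random.default_rng(0)
d, n = 10, 200
w = rng.normal(size=d)                       # "trained" weights
X_forget = rng.normal(size=(n, d))           # data to be forgotten
y_forget = (rng.random(n) < 0.5).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Per-example gradient of the log-likelihood wrt w:
#   grad_i = (y_i - sigmoid(x_i @ w)) * x_i
residual = (y_forget - sigmoid(X_forget @ w))[:, None]
per_example_grads = residual * X_forget      # shape (n, d)

# Empirical Fisher diagonal: mean squared gradient per parameter.
# Large entries mark weights whose log-likelihood is most sensitive
# to the forget set -- the candidates for resetting.
fisher_diag = np.mean(per_example_grads ** 2, axis=0)   # shape (d,)

# "Forget" by re-initializing the k most sensitive weights,
# leaving the rest of the model untouched (no full retraining).
k = 3
sensitive = np.argsort(fisher_diag)[-k:]
w_clean = w.copy()
w_clean[sensitive] = rng.normal(scale=0.01, size=k)
```

Because only the diagonal is used, the cost is one backward pass per example rather than the full-matrix inversion the abstract says prior work requires.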