Nov. 20, 2023, 2:10 a.m. | Jiaeli Shi, Najah Ghalyan, Kostis Gourgoulias, John Buford, Sean Moran

cs.CR updates on

Machine learning models trained on sensitive or private data can
inadvertently memorize and leak that information. Machine unlearning seeks to
retroactively remove such details from model weights to protect privacy. We
contribute a lightweight unlearning algorithm that leverages the Fisher
Information Matrix (FIM) for selective forgetting. Prior work in this area
requires full retraining or large matrix inversions, which are computationally
expensive. Our key insight is that the diagonal elements of the FIM, which
measure the sensitivity of log-likelihood to …
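The idea of using the diagonal of the Fisher Information Matrix to gauge per-parameter sensitivity can be sketched as follows. This is a minimal illustration, not the authors' algorithm: it assumes a logistic-regression model, estimates the diagonal FIM empirically as the mean of squared per-sample gradients of the log-likelihood, and (hypothetically) perturbs weights in proportion to how much more sensitive they are to the forget set than to the retain set.

```python
import numpy as np

def diag_fisher(w, X, y):
    """Empirical diagonal FIM: mean of squared per-sample gradients
    of the log-likelihood of a logistic model p = sigmoid(X @ w)."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    g = (y - p)[:, None] * X        # per-sample gradient of log-likelihood wrt w
    return np.mean(g ** 2, axis=0)  # diagonal FIM estimate, shape (d,)

def forget(w, X_forget, y_forget, X_retain, y_retain,
           noise_scale=0.1, rng=None):
    """Hypothetical selective-forgetting step (illustrative only):
    add noise to weights scaled by the ratio of forget-set to
    retain-set Fisher information, so parameters that matter mostly
    for the forget set are perturbed the most."""
    rng = rng or np.random.default_rng(0)
    f_forget = diag_fisher(w, X_forget, y_forget)
    f_retain = diag_fisher(w, X_retain, y_retain)
    ratio = f_forget / (f_retain + 1e-8)
    return w + rng.normal(0.0, noise_scale, size=w.shape) * ratio
```

Because only the FIM diagonal is used, the cost is one gradient pass per sample with no matrix inversion, which is the computational advantage the abstract points to.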

