April 29, 2024, 4:12 a.m. | Jiaeli Shi, Najah Ghalyan, Kostis Gourgoulias, John Buford, Sean Moran

cs.CR updates on arXiv.org

arXiv:2311.10448v2 Announce Type: replace-cross
Abstract: Machine learning models trained on sensitive or private data can inadvertently memorize and leak that information. Machine unlearning seeks to retroactively remove such details from model weights to protect privacy. We contribute a lightweight unlearning algorithm that leverages the Fisher Information Matrix (FIM) for selective forgetting. Prior work in this area requires full retraining or large matrix inversions, which are computationally expensive. Our key insight is that the diagonal elements of the FIM, which measure …
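The abstract is truncated before the method details, but the setup it describes — using the diagonal entries of the Fisher Information Matrix as a cheap per-parameter importance score for selective forgetting — can be illustrated with a short sketch. The PyTorch code below is not the paper's algorithm; it is a minimal, hypothetical scrubbing rule in the general spirit of Fisher-based unlearning, where parameters that are informative about the forget set but unimportant for the retained data are perturbed with noise. The function names (`diagonal_fim`, `fisher_forget`), the importance ratio, and the noise-scaling rule are all assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def diagonal_fim(model, data_loader, device="cpu"):
    """Estimate the diagonal of the Fisher Information Matrix as the
    average squared gradient of the log-likelihood over a dataset.
    (Empirical-Fisher approximation; an assumption, not the paper's exact estimator.)"""
    fim = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    model.eval()
    n_batches = 0
    for x, y in data_loader:
        x, y = x.to(device), y.to(device)
        model.zero_grad()
        loss = F.nll_loss(F.log_softmax(model(x), dim=1), y)
        loss.backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fim[n] += p.grad.detach() ** 2
        n_batches += 1
    return {n: f / max(n_batches, 1) for n, f in fim.items()}

@torch.no_grad()
def fisher_forget(model, fim_forget, fim_retain, noise_scale=1e-3, eps=1e-8):
    """Hypothetical scrubbing rule: add noise to parameters in proportion
    to how much more informative they are about the forget set than the
    retain set. Avoids retraining and any matrix inversion by working
    only with the FIM diagonal."""
    for n, p in model.named_parameters():
        # High ratio: parameter mostly encodes the forget data.
        # Low ratio: parameter matters for retained knowledge; leave it mostly alone.
        ratio = fim_forget[n] / (fim_retain[n] + eps)
        p.add_(torch.randn_like(p) * noise_scale * ratio.clamp(max=1.0))
```

In practice, one would estimate `fim_forget` on the data to be removed and `fim_retain` on the remaining training data, then apply `fisher_forget` once to the trained model; `noise_scale` and the clamping threshold are placeholder hyperparameters, not values from the paper.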
