Nov. 20, 2023, 2:10 a.m. | Jiaeli Shi, Najah Ghalyan, Kostis Gourgoulias, John Buford, Sean Moran

cs.CR updates on arXiv.org

Machine learning models trained on sensitive or private data can
inadvertently memorize and leak that information. Machine unlearning seeks to
retroactively remove such details from model weights to protect privacy. We
contribute a lightweight unlearning algorithm that leverages the Fisher
Information Matrix (FIM) for selective forgetting. Prior work in this area
requires full retraining or large matrix inversions, which are computationally
expensive. Our key insight is that the diagonal elements of the FIM, which
measure the sensitivity of log-likelihood to …
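The abstract is cut off before the method is fully stated, but its named ingredients (the FIM diagonal as a per-parameter sensitivity measure, and forgetting without retraining or matrix inversion) suggest the general shape of the approach. Below is a minimal PyTorch sketch under those assumptions: `diagonal_fim` estimates the FIM diagonal from squared per-example gradients, and `fisher_scrub` applies an illustrative noise-injection update loosely modeled on earlier Fisher-based forgetting work. The function names, the `alpha` scale, and the scrubbing rule itself are hypothetical, not the paper's algorithm.

```python
import torch
import torch.nn.functional as F

def diagonal_fim(model, data_loader, device="cpu"):
    """Estimate the diagonal of the Fisher Information Matrix via the
    empirical Fisher: the average squared per-example gradient of the
    log-likelihood with respect to each parameter."""
    fim = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    model.eval()
    n_examples = 0
    for inputs, targets in data_loader:
        inputs, targets = inputs.to(device), targets.to(device)
        for x, y in zip(inputs, targets):
            model.zero_grad()
            log_probs = F.log_softmax(model(x.unsqueeze(0)), dim=-1)
            # Negative log-likelihood of the observed label.
            F.nll_loss(log_probs, y.unsqueeze(0)).backward()
            for n, p in model.named_parameters():
                if p.grad is not None:
                    fim[n] += p.grad.detach() ** 2
            n_examples += 1
    return {n: f / max(n_examples, 1) for n, f in fim.items()}

@torch.no_grad()
def fisher_scrub(model, fim_retain, alpha=1e-4, eps=1e-8):
    """Illustrative forgetting step (an assumption, not the paper's rule):
    inject Gaussian noise scaled inversely to the retain-set Fisher
    diagonal, so parameters the retained data constrains strongly stay
    nearly intact, while weakly constrained parameters, which may encode
    forget-set detail, are perturbed."""
    for n, p in model.named_parameters():
        noise = torch.randn_like(p) * (alpha / (fim_retain[n] + eps)).sqrt()
        p.add_(noise)
```

Typical usage under these assumptions would be `fim_retain = diagonal_fim(model, retain_loader)` followed by `fisher_scrub(model, fim_retain)`. Note that only elementwise operations on the diagonal are needed, which is consistent with the abstract's claim of avoiding full retraining and large matrix inversions.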
