March 5, 2024, 3:11 p.m. | Jamie Hayes, Ilia Shumailov, Eleni Triantafillou, Amr Khalifa, Nicolas Papernot

cs.CR updates on arXiv.org (arxiv.org)

arXiv:2403.01218v1 Announce Type: cross
Abstract: The high cost of model training makes it increasingly desirable to develop techniques for unlearning. These techniques seek to remove the influence of a training example without having to retrain the model from scratch. Intuitively, once a model has unlearned, an adversary that interacts with the model should no longer be able to tell whether the unlearned example was included in the model's training set or not. In the privacy literature, this is known as …
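The privacy criterion described above is typically tested with a membership inference attack: if, after unlearning, an adversary's guess about whether the example was in the training set is no better than chance, the unlearning is considered effective. The sketch below is a generic loss-threshold membership inference test, not the paper's own evaluation; the synthetic losses and the simple threshold sweep are assumptions made purely for illustration.

```python
import numpy as np

def loss_threshold_mia(member_losses, nonmember_losses):
    """Loss-threshold membership inference: sweep a threshold over per-example
    losses and report the best attack accuracy. For a successfully unlearned
    example, the attack should be near chance (accuracy ~0.5)."""
    losses = np.concatenate([member_losses, nonmember_losses])
    labels = np.concatenate([np.ones_like(member_losses),
                             np.zeros_like(nonmember_losses)])
    best_acc = 0.5
    for t in np.unique(losses):
        preds = (losses <= t).astype(int)  # low loss -> guess "member"
        best_acc = max(best_acc, (preds == labels).mean())
    return best_acc

# Toy usage with synthetic per-example losses (hypothetical values):
# training members tend to have lower loss than held-out non-members.
rng = np.random.default_rng(0)
member_losses = rng.normal(0.5, 0.2, 1000)
nonmember_losses = rng.normal(1.0, 0.3, 1000)
print(f"attack accuracy: {loss_threshold_mia(member_losses, nonmember_losses):.3f}")
```

In this toy setup the attack succeeds because members and non-members have distinguishable loss distributions; an unlearning method would aim to push the unlearned example's behaviour toward the non-member distribution so that such an attack degrades to chance.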

