Digital Forgetting in Large Language Models: A Survey of Unlearning Methods
April 3, 2024, 4:10 a.m. | Alberto Blanco-Justicia, Najeeb Jebreel, Benet Manzanares, David Sánchez, Josep Domingo-Ferrer, Guillem Collell, Kuan Eeik Tan
cs.CR updates on arXiv.org
Abstract: The objective of digital forgetting is, given a model with undesirable knowledge or behavior, to obtain a new model in which the detected issues are no longer present. Motivations for forgetting include privacy protection, copyright protection, elimination of biases and discrimination, and prevention of harmful content generation. Digital forgetting has to be effective (the new model must actually have forgotten the undesired knowledge/behavior), retain the performance of the original model on the desirable tasks, …
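One family of approximate unlearning methods surveyed in this line of work performs gradient ascent on the data to be forgotten while continuing gradient descent on a retain set, so the model loses the undesired behavior without sacrificing performance on desirable tasks. The sketch below is purely illustrative (a toy logistic model on made-up data, not code from the paper):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def grad(w, b, x, y):
    # Gradient of the binary cross-entropy loss for one (x, y) example.
    p = sigmoid(w * x + b)
    return (p - y) * x, (p - y)

def train(data, epochs=200, lr=0.5):
    # Ordinary training: gradient descent on all examples.
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            gw, gb = grad(w, b, x, y)
            w -= lr * gw
            b -= lr * gb
    return w, b

def unlearn(w, b, forget, retain, steps=50, lr=0.2):
    # Gradient *ascent* on the forget set (increase its loss),
    # interleaved with descent on the retain set to preserve
    # performance on the desirable task.
    for _ in range(steps):
        for x, y in forget:
            gw, gb = grad(w, b, x, y)
            w += lr * gw  # ascend: unlearn this example
            b += lr * gb
        for x, y in retain:
            gw, gb = grad(w, b, x, y)
            w -= lr * gw  # descend: retain desired behavior
            b -= lr * gb
    return w, b
```

For example, training on four clean points plus one point to forget, then running `unlearn`, drives the model's prediction on the forgotten point away from its label while the retain points remain correctly classified. Real LLM unlearning operates on billions of parameters and must additionally be evaluated for effectiveness and utility retention, as the abstract notes.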
Jobs in InfoSec / Cybersecurity
Principal Security Engineer
@ Elsevier | Home-based, Georgia
Infrastructure Compliance Engineer
@ NVIDIA | US, CA, Santa Clara
Information Systems Security Engineer (ISSE) / Cybersecurity SME
@ Green Cell Consulting | Twentynine Palms, CA, United States
Sales Security Analyst
@ Everbridge | Bengaluru
Apprenticeship – Threat Intelligence Analyst – Cybersecurity – Île-de-France
@ Sopra Steria | Courbevoie, France
Third Party Cyber Risk Analyst
@ Chubb | Philippines