Machine Unlearning for Traditional Models and Large Language Models: A Short Survey
April 2, 2024, 7:12 p.m. | Yi Xu
cs.CR updates on arXiv.org arxiv.org
Abstract: With the implementation of personal data privacy regulations, the field of machine learning (ML) faces the challenge of the "right to be forgotten". Machine unlearning has emerged to address this issue, aiming to delete data and reduce its impact on models in response to user requests. Despite widespread interest in machine unlearning, comprehensive surveys of its latest advancements, especially in the field of Large Language Models (LLMs), are lacking. This survey aims to fill this …
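To make the goal concrete, here is a minimal, hypothetical sketch of *exact* unlearning (not a method from the survey): a toy nearest-centroid classifier whose internal sums let a training point's contribution be subtracted out exactly, so the state after unlearning is identical to a model retrained without that point. All class and method names are illustrative assumptions.

```python
# Toy illustration of exact machine unlearning (hypothetical example, not
# from the survey): a nearest-centroid classifier stores per-class feature
# sums and counts, so a forgotten point's contribution can be removed
# exactly, matching a model retrained from scratch without that point.

class CentroidModel:
    def __init__(self):
        self.sums = {}    # label -> per-feature sums of training points
        self.counts = {}  # label -> number of training points

    def learn(self, x, y):
        if y not in self.sums:
            self.sums[y] = [0.0] * len(x)
            self.counts[y] = 0
        self.sums[y] = [s + xi for s, xi in zip(self.sums[y], x)]
        self.counts[y] += 1

    def unlearn(self, x, y):
        # "Right to be forgotten": subtract the point's exact contribution.
        self.sums[y] = [s - xi for s, xi in zip(self.sums[y], x)]
        self.counts[y] -= 1
        if self.counts[y] == 0:
            del self.sums[y], self.counts[y]

    def predict(self, x):
        def sq_dist(label):
            centroid = [s / self.counts[label] for s in self.sums[label]]
            return sum((xi - ci) ** 2 for xi, ci in zip(x, centroid))
        return min(self.sums, key=sq_dist)
```

For large models such as LLMs, this kind of exact bookkeeping is infeasible, which is precisely why the approximate unlearning techniques the survey covers are needed.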