Machine Unlearning of Pre-trained Large Language Models
Feb. 26, 2024, 5:11 a.m. | Jin Yao, Eli Chien, Minxin Du, Xinyao Niu, Tianhao Wang, Zezhou Cheng, Xiang Yue
cs.CR updates on arXiv.org arxiv.org
Abstract: This study investigates the concept of the "right to be forgotten" within the context of large language models (LLMs). We explore machine unlearning as a pivotal solution, with a focus on pre-trained models, a notably under-researched area. Our research delineates a comprehensive framework for machine unlearning in pre-trained LLMs, encompassing a critical analysis of seven diverse unlearning methods. Through rigorous evaluation using curated datasets from arXiv, books, and GitHub, we establish a robust benchmark for unlearning …