LMEraser: Large Model Unlearning through Adaptive Prompt Tuning
April 18, 2024, 4:11 a.m. | Jie Xu, Zihan Wu, Cong Wang, Xiaohua Jia
cs.CR updates on arXiv.org arxiv.org
Abstract: To address the growing demand for privacy protection in machine learning, we propose a novel and efficient machine unlearning approach for Large Models, called LMEraser. Existing unlearning research suffers from entangled training data and complex model architectures, incurring extremely high computational costs for large models. LMEraser takes a divide-and-conquer strategy with a prompt tuning architecture to isolate data influence. The training dataset is partitioned into public and private datasets. Public data are used to train …
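The divide-and-conquer idea in the abstract can be illustrated with a small sketch. This is not the paper's implementation: the partitioning scheme, the `tune_prompt` stand-in, and the function names are all hypothetical, meant only to show why isolating each private partition behind its own prompt makes unlearning cheap (removing a sample retunes one prompt instead of retraining the whole model).

```python
# Hypothetical sketch of LMEraser-style data isolation.
# The frozen backbone (trained on public data) is implicit here;
# each private partition gets its own independently tuned prompt.

def partition(private_data, num_partitions):
    """Assign each private sample to exactly one partition (round-robin here)."""
    parts = [[] for _ in range(num_partitions)]
    for i, sample in enumerate(private_data):
        parts[i % num_partitions].append(sample)
    return parts

def tune_prompt(part):
    """Stand-in for prompt tuning: the 'prompt' depends only on this partition."""
    return {"trained_on": tuple(part)}

def unlearn(parts, prompts, sample):
    """Delete a sample and retune only the affected partition's prompt.

    Returns the index of the retuned partition, or None if the sample
    was not found in any private partition.
    """
    for idx, part in enumerate(parts):
        if sample in part:
            part.remove(sample)
            prompts[idx] = tune_prompt(part)  # only this prompt changes
            return idx
    return None

# Usage: removing "c" touches only partition 0's prompt.
parts = partition(["a", "b", "c", "d"], 2)
prompts = [tune_prompt(p) for p in parts]
retuned = unlearn(parts, prompts, "c")
```

Because each prompt's dependence is confined to a single partition, the cost of unlearning scales with partition size rather than with the full training set.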