Sept. 18, 2023, 1:10 a.m. | Hongsheng Hu, Shuo Wang, Jiamin Chang, Haonan Zhong, Ruoxi Sun, Shuang Hao, Haojin Zhu, Minhui Xue

cs.CR updates on arXiv.org arxiv.org

The right to be forgotten requires the removal or "unlearning" of a user's
data from machine learning models. However, in the context of Machine Learning
as a Service (MLaaS), retraining a model from scratch to fulfill the unlearning
request is impractical due to the lack of training data on the service
provider's side (the server). Moreover, approximate unlearning involves a
complex trade-off between utility (model performance) and privacy
(unlearning performance). In this paper, we explore the potential …
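To make the trade-off concrete, here is a minimal sketch of one common approximate-unlearning idea: gradient *ascent* on the loss of the forget set, applied to an already-trained model. This is an illustrative toy (logistic regression on synthetic data, hand-picked learning rates), not the method proposed in the paper; the function names and hyperparameters are my own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    # clip to avoid overflow in exp for extreme logits
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def grad(w, X, y):
    # gradient of the mean logistic loss w.r.t. weights
    return X.T @ (sigmoid(X @ w) - y) / len(y)

def loss(w, X, y):
    p = sigmoid(X @ w)
    return -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

# synthetic data: two overlapping Gaussian blobs (labels 0 and 1)
X = np.vstack([rng.normal(-1.0, 1.0, (100, 2)),
               rng.normal(1.0, 1.0, (100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

# ordinary training by gradient descent
w = np.zeros(2)
for _ in range(500):
    w -= 0.5 * grad(w, X, y)

# a user asks to forget the first 20 samples
forget, yf = X[:20], y[:20]
keep, yk = X[20:], y[20:]

# approximate unlearning: ascend the loss on the forget set only.
# More ascent steps => higher "privacy" (forgotten points fit worse)
# but lower "utility" (the retained decision boundary degrades).
w_unlearned = w.copy()
for _ in range(100):
    w_unlearned += 0.5 * grad(w_unlearned, forget, yf)

forget_loss_before = loss(w, forget, yf)
forget_loss_after = loss(w_unlearned, forget, yf)
keep_acc = np.mean((sigmoid(keep @ w_unlearned) > 0.5) == yk)
print(forget_loss_before, forget_loss_after, keep_acc)
```

Retraining from scratch on the retained data would give an exact guarantee, but — as the abstract notes — the MLaaS server typically no longer holds the training data, which is why approximate updates like this ascent step are attractive despite the utility cost.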

Tags: context, data exposing, machine learning, machine learning models, request, right to be forgotten, service, vulnerabilities
