A Duty to Forget, a Right to be Assured? Exposing Vulnerabilities in Machine Unlearning Services. (arXiv:2309.08230v1 [cs.CR])
cs.CR updates on arXiv.org
The right to be forgotten requires the removal or "unlearning" of a user's
data from machine learning models. However, in the context of Machine Learning
as a Service (MLaaS), retraining a model from scratch to fulfill the unlearning
request is impractical due to the lack of training data on the service
provider's side (the server). Moreover, approximate unlearning involves a
complex trade-off between utility (model performance) and privacy
(unlearning effectiveness). In this paper, we explore the potential …
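To make the utility/privacy trade-off concrete, here is a minimal, purely illustrative sketch of one common approximate-unlearning heuristic: a few gradient-ascent steps on the to-be-forgotten samples, applied to a toy logistic-regression model. This is an assumption for illustration, not the method proposed in the paper; all names (`grad`, `accuracy`, the forget/retain split) are made up for this example.

```python
import numpy as np

# Illustrative sketch only: gradient-ascent "approximate unlearning"
# on a toy logistic-regression model (NOT the paper's method).

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad(w, X, y):
    # Gradient of the mean cross-entropy loss w.r.t. the weights.
    return X.T @ (sigmoid(X @ w) - y) / len(y)

def accuracy(w, X, y):
    return np.mean((sigmoid(X @ w) > 0.5) == y)

# Toy data: two Gaussian blobs, one per class.
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])

# Train on the full dataset (gradient descent).
w = np.zeros(2)
for _ in range(200):
    w -= 0.5 * grad(w, X, y)

# A user requests deletion of their 10 samples.
forget, retain = slice(0, 10), slice(10, None)

# Approximate unlearning: ascend the loss on the forget set only.
w_unlearned = w.copy()
for _ in range(5):
    w_unlearned += 0.5 * grad(w_unlearned, X[forget], y[forget])

# Utility = accuracy on retained data; privacy proxy = how badly the
# model now fits the forgotten data. More ascent steps erase the
# forgotten samples more thoroughly but also erode utility.
print("utility (retain acc):", accuracy(w_unlearned, X[retain], y[retain]))
print("forget acc:", accuracy(w_unlearned, X[forget], y[forget]))
```

The number of ascent steps is the knob behind the trade-off the abstract describes: too few and the user's data still influences the model; too many and overall model performance degrades.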