Sept. 21, 2023, 1:10 a.m. | Tian Hui, Farhad Farokhi, Olga Ohrimenko

cs.CR updates on arXiv.org

In this paper, we consider the setting where machine learning models are
retrained on updated datasets in order to incorporate the most up-to-date
information or to reflect distribution shifts. We investigate whether one can
infer information about these updates to the training data (e.g., changes to
attribute values of records). Here, the adversary has access to snapshots of
the machine learning model before and after the change to the dataset occurs.
Contrary to the existing literature, we assume that an attribute …
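The threat model described above can be made concrete with a small sketch. The abstract excerpt does not specify the paper's actual attack, so everything below is illustrative and hypothetical: the logistic-regression models, the single-record attribute change, and the coefficient-comparison rule are assumptions standing in for the general setting of an adversary who only observes model snapshots before and after a dataset update.

```python
# Illustrative sketch only: the paper's actual attack is not given in this
# excerpt. All names and the toy inference rule below are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Original training data: 200 records, 4 attributes, binary label.
X_before = rng.normal(size=(200, 4))
y = (X_before[:, 0] + X_before[:, 1] > 0).astype(int)

# Dataset update: one record's value for attribute 2 changes.
X_after = X_before.copy()
X_after[17, 2] += 5.0

# The model owner retrains on the updated data; the adversary only sees
# the two model snapshots (before and after the update).
model_before = LogisticRegression().fit(X_before, y)
model_after = LogisticRegression().fit(X_after, y)

# Toy adversary: compare the snapshots' coefficients and guess which
# attribute was updated from where the parameters differ the most.
coef_diff = np.abs(model_before.coef_ - model_after.coef_).ravel()
guessed_attribute = int(np.argmax(coef_diff))
print("per-attribute coefficient shift:", np.round(coef_diff, 4))
print("adversary's guess for the updated attribute:", guessed_attribute)
```

This only shows the two-snapshot access pattern; the paper's setting, in which the adversary targets changes to attribute values, would plausibly use more refined signals than a raw coefficient difference.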

Tags: datasets, distribution shifts, information leakage, machine learning models, records, training data, updates
