Feb. 13, 2024, 5:10 a.m. | Marlon Tobaben, Gauri Pradhan, Yuan He, Joonas Jälkö, Antti Honkela

cs.CR updates on arXiv.org arxiv.org

We apply a state-of-the-art membership inference attack (MIA) to systematically test the practical privacy vulnerability of fine-tuning large image classification models. We focus on understanding the properties of data sets and samples that make them vulnerable to membership inference. In terms of data set properties, we find a strong power-law dependence between the number of examples per class in the data and the MIA vulnerability, as measured by the true positive rate of the attack at a low false positive rate. …
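The vulnerability metric the abstract refers to, true positive rate at a fixed low false positive rate, can be computed directly from attack scores. Below is a minimal sketch assuming a generic score-thresholding attack; the function name `tpr_at_fpr` and the synthetic member/non-member scores are illustrative, not the paper's attack or data.

```python
import numpy as np

def tpr_at_fpr(scores_members, scores_nonmembers, target_fpr=0.001):
    """TPR of a score-thresholding attack at a fixed low FPR.

    The threshold is set so that at most `target_fpr` of the
    non-member scores exceed it; TPR is then the fraction of
    member scores above that threshold.
    """
    scores_nonmembers = np.sort(scores_nonmembers)
    # index of the (1 - target_fpr) quantile of non-member scores
    k = int(np.ceil((1 - target_fpr) * len(scores_nonmembers))) - 1
    threshold = scores_nonmembers[k]
    return float(np.mean(scores_members > threshold))

# Toy example with synthetic attack scores: members score
# slightly higher on average, as a successful MIA would produce.
rng = np.random.default_rng(0)
members = rng.normal(1.0, 1.0, 10_000)
nonmembers = rng.normal(0.0, 1.0, 10_000)
print(tpr_at_fpr(members, nonmembers, target_fpr=0.001))
```

Evaluating at a low FPR (e.g. 0.1%) rather than with average-case metrics like AUC reflects the worst-case framing common in MIA evaluation: an attack matters if it confidently identifies even a few members while raising almost no false alarms.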
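A power-law dependence like the one described, vulnerability falling off as examples per class grow, is typically exhibited by fitting a line in log-log space. The sketch below uses purely synthetic numbers (not the paper's measurements) with an assumed exponent, just to show how such an exponent would be recovered.

```python
import numpy as np

# Illustrative synthetic data: assume vulnerability follows
# vuln ≈ C * n^(-alpha) in the number of examples per class n.
# alpha=0.8 and C=0.5 are made-up values for this demo only.
n_per_class = np.array([2.0, 4.0, 8.0, 16.0, 32.0, 64.0])
true_alpha, true_C = 0.8, 0.5
rng = np.random.default_rng(1)
vuln = true_C * n_per_class ** (-true_alpha) \
    * rng.lognormal(0.0, 0.05, n_per_class.size)

# A power law is linear in log-log space:
#   log vuln = log C - alpha * log n,
# so ordinary least squares on the logs recovers the exponent.
slope, intercept = np.polyfit(np.log(n_per_class), np.log(vuln), 1)
alpha_hat, C_hat = -slope, np.exp(intercept)
print(f"alpha ~ {alpha_hat:.2f}, C ~ {C_hat:.2f}")
```

The negative slope of the log-log fit is the power-law exponent; a straight line in this space across a wide range of per-class counts is what "strong power-law dependence" would look like empirically.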

