Feb. 5, 2024, 8:10 p.m. | Arezoo Rajabi, Reeya Pimple, Aiswarya Janardhanan, Surudhi Asokraj, Bhaskar Ramasubramanian, Radha Poovendran

cs.CR updates on arXiv.org

Transfer learning (TL) has been demonstrated to improve DNN model performance when training samples are scarce. However, the suitability of TL as a way to reduce the vulnerability of overfitted DNNs to privacy attacks remains unexplored. One class of privacy attacks, membership inference attacks (MIAs), aims to determine whether a given sample belongs to the training dataset (member) or not (non-member). We introduce Double-Dip, a systematic empirical study investigating the use of TL (Stage-1) combined with randomization …
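To make the MIA definition above concrete, here is a minimal sketch of a classic confidence-threshold membership inference attack. It is not the paper's Double-Dip pipeline; the synthetic confidence distributions and the 0.8 threshold are assumptions chosen purely for illustration of the overfitting signal MIAs exploit.

```python
# A toy confidence-threshold MIA: label a sample a "member" of the
# training set if the model assigns it unusually high confidence.
# Synthetic data stands in for a real overfitted DNN's softmax outputs.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical max-softmax confidences: an overfitted model tends to be
# more confident on training (member) samples than on unseen ones.
member_conf = rng.beta(8, 2, size=1000)      # skewed toward 1.0
nonmember_conf = rng.beta(4, 4, size=1000)   # centered near 0.5

def mia_predict(confidences: np.ndarray, threshold: float = 0.8) -> np.ndarray:
    """Predict 'member' (True) when confidence exceeds the threshold."""
    return confidences > threshold

# Balanced attack accuracy: 0.5 means no better than random guessing.
tpr = mia_predict(member_conf).mean()        # members correctly flagged
tnr = (~mia_predict(nonmember_conf)).mean()  # non-members correctly rejected
print(f"MIA balanced accuracy: {(tpr + tnr) / 2:.2f}")
```

The gap between the two confidence distributions is exactly what defenses such as the TL-plus-randomization pipeline studied in the abstract aim to shrink.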

cs.AI, cs.CR, cs.LG
