Feb. 5, 2024, 8:10 p.m. | Arezoo Rajabi Reeya Pimple Aiswarya Janardhanan Surudhi Asokraj Bhaskar Ramasubramanian Radha Poovendran

cs.CR updates on arXiv.org

Transfer learning (TL) has been demonstrated to improve DNN model performance when training samples are scarce. However, the suitability of TL as a means of reducing the vulnerability of overfitted DNNs to privacy attacks remains unexplored. One class of privacy attacks, membership inference attacks (MIAs), aims to determine whether a given sample belongs to the training dataset (member) or not (nonmember). We introduce Double-Dip, a systematic empirical study investigating the use of TL (Stage-1) combined with randomization …
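To illustrate the attack class the abstract describes, here is a minimal sketch of a confidence-threshold membership inference attack. It assumes (as the abstract suggests) that an overfitted model assigns higher confidence to its training samples than to unseen ones; the model, data, and threshold below are synthetic placeholders, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def model_confidence(n_samples, member_mask):
    # Hypothetical overfitted model: member samples receive an
    # inflated confidence score relative to nonmembers.
    base = rng.uniform(0.5, 0.8, size=n_samples)
    return np.clip(base + 0.15 * member_mask, 0.0, 1.0)

def mia_predict(confidences, threshold=0.75):
    # The attacker labels a sample "member" whenever the model's
    # confidence on it exceeds a fixed threshold.
    return confidences > threshold

# Synthetic evaluation: 100 members followed by 100 nonmembers.
member_mask = np.concatenate([np.ones(100), np.zeros(100)])
conf = model_confidence(len(member_mask), member_mask)
pred = mia_predict(conf)
accuracy = (pred == member_mask.astype(bool)).mean()
print(f"attack accuracy: {accuracy:.2f}")
```

On a well-generalized model the member/nonmember confidence gap shrinks, pushing this attack's accuracy toward the 0.5 coin-flip baseline; that gap is what defenses such as the TL-plus-randomization pipeline studied here aim to close.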

