June 16, 2022, 1:20 a.m. | Yunhao Yang, Parham Gohari, Ufuk Topcu

cs.CR updates on arXiv.org

We study the privacy implications of training recurrent neural networks
(RNNs) with sensitive training datasets. Considering membership inference
attacks (MIAs), which aim to infer whether or not specific data records have
been used in training a given machine learning model, we provide empirical
evidence that a neural network's architecture impacts its vulnerability to
MIAs. In particular, we demonstrate that RNNs are subject to higher attack
accuracy than their feed-forward neural network (FFNN) counterparts. Additionally, we
study the effectiveness of two …

machine learning, machine learning models, networks, privacy
