On the Privacy Risks of Deploying Recurrent Neural Networks in Machine Learning Models. (arXiv:2110.03054v3 [cs.CR] UPDATED)
June 16, 2022, 1:20 a.m. | Yunhao Yang, Parham Gohari, Ufuk Topcu
cs.CR updates on arXiv.org
We study the privacy implications of training recurrent neural networks
(RNNs) with sensitive training datasets. Considering membership inference
attacks (MIAs), which aim to infer whether or not specific data records have
been used in training a given machine learning model, we provide empirical
evidence that a neural network's architecture impacts its vulnerability to
MIAs. In particular, we demonstrate that RNNs are subject to a higher attack
accuracy than their feed-forward neural network (FFNN) counterparts. Additionally, we
study the effectiveness of two …
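For readers unfamiliar with MIAs, the sketch below illustrates a simple confidence-threshold baseline attack, not the specific attacks studied in the paper: records on which the target model is highly confident are guessed to be training members. The threshold of 0.9 and all confidence values are illustrative assumptions, as is the availability of per-record confidence scores to the attacker.

import numpy as np

def confidence_threshold_mia(confidences, threshold=0.9):
    # Guess "member" for any record whose true-label confidence
    # meets the threshold; members tend to score higher because
    # the model has memorized them to some degree.
    return confidences >= threshold

# Hypothetical confidences the target model assigns to the true label
# of each queried record.
member_conf = np.array([0.97, 0.91, 0.88, 0.95])     # training records
nonmember_conf = np.array([0.62, 0.85, 0.40, 0.71])  # unseen records

scores = np.concatenate([member_conf, nonmember_conf])
labels = np.concatenate([np.ones(4), np.zeros(4)])   # 1 = member

preds = confidence_threshold_mia(scores)
print(f"attack accuracy: {(preds == labels).mean():.2f}")

The "attack accuracy" printed here is the quantity the abstract refers to: how often the adversary correctly distinguishes training members from non-members.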
machine learning, machine learning models, networks, privacy
Jobs in InfoSec / Cybersecurity
SOC 2 Manager, Audit and Certification
@ Deloitte | US and CA Multiple Locations
Information Security Engineers
@ D. E. Shaw Research | New York City
Information Security Manager & ISSO
@ Federal Reserve System | Minneapolis, MN
Forensic Lead
@ Arete | Hyderabad
Lead Security Risk Analyst (GRC)
@ Justworks, Inc. | New York City
Senior Consultant, Cyber Crisis Management & Business Continuity (M/F)
@ Hifield | Sèvres, France