Nov. 5, 2023, 6:10 a.m. | Iyiola E. Olatunji, Thorben Funke, Megha Khosla

cs.CR updates on arXiv.org

With the increasing popularity of graph neural networks (GNNs) in several
sensitive applications like healthcare and medicine, concerns have been raised
over the privacy aspects of trained GNNs. In particular, GNNs are vulnerable to
privacy attacks, such as membership inference attacks, even if only black-box
access to the trained model is granted. We propose PrivGNN, a
privacy-preserving framework for releasing GNN models in a centralized setting.
Assuming access to a public unlabeled graph, PrivGNN provides a framework to
release …
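As a rough illustration of this release setting, the sketch below assumes a PATE-style knowledge-distillation pipeline: a teacher GNN is trained on the sensitive labeled graph, its predictions on the public unlabeled graph are perturbed with noise, and only a student model trained on that public data is ever released. This is a plausible reading of the setup described in the abstract, not the paper's confirmed algorithm; `TinyGCN`, the two-layer architecture, and the fixed Laplace noise scale are all illustrative placeholders.

```python
# Hypothetical sketch of a teacher-student private release pipeline for a GNN.
# All names and hyperparameters are illustrative, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGCN(nn.Module):
    """Minimal dense GCN: H' = A_hat @ ReLU(A_hat @ X @ W1) @ W2."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim)
        self.w2 = nn.Linear(hid_dim, n_classes)

    def forward(self, x, a_hat):
        h = F.relu(a_hat @ self.w1(x))
        return a_hat @ self.w2(h)

def normalize_adj(a):
    """Symmetrically normalized adjacency with self-loops: D^-1/2 (A+I) D^-1/2."""
    a = a + torch.eye(a.size(0))
    d_inv_sqrt = a.sum(1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)

def train(model, x, a_hat, labels, epochs=100, lr=0.01):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(model(x, a_hat), labels)
        loss.backward()
        opt.step()

# --- Private side: teacher trained on the sensitive labeled graph. ---
n_priv, n_pub, d, c = 200, 150, 16, 4
x_priv = torch.randn(n_priv, d)
a_priv = normalize_adj((torch.rand(n_priv, n_priv) < 0.05).float())
y_priv = torch.randint(0, c, (n_priv,))
teacher = TinyGCN(d, 32, c)
train(teacher, x_priv, a_priv, y_priv)

# --- Release side: noisy pseudo-labels on the public unlabeled graph. ---
x_pub = torch.randn(n_pub, d)
a_pub = normalize_adj((torch.rand(n_pub, n_pub) < 0.05).float())
with torch.no_grad():
    logits = teacher(x_pub, a_pub)
    # Laplace noise on the teacher's scores; the scale (assumed here) trades
    # privacy against pseudo-label fidelity.
    noisy = logits + torch.distributions.Laplace(0.0, 1.0).sample(logits.shape)
    y_pseudo = noisy.argmax(dim=1)

# The student sees only public data plus noisy labels; it is the model released.
student = TinyGCN(d, 32, c)
train(student, x_pub, a_pub, y_pseudo)
```

In an actual framework of this kind, the privacy guarantee would come from calibrating the noise (and any subsampling of the private graph) to a formal differential-privacy budget; the fixed Laplace scale above is only a placeholder.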
