Nov. 11, 2022, 2:20 a.m. | Khang Tran, Phung Lai, NhatHai Phan, Issa Khalil, Yao Ma, Abdallah Khreishah, My Thai, Xintao Wu

cs.CR updates on arXiv.org

Graph neural networks (GNNs) are susceptible to privacy inference attacks
(PIAs), given their ability to learn joint representations from the features
and edges of nodes in graph data. To prevent privacy leakage in GNNs, we
propose a novel heterogeneous randomized response (HeteroRR) mechanism to
protect nodes' features and edges against PIAs under differential privacy (DP)
guarantees, without unduly degrading data and model utility in training GNNs.
Our idea is to balance the importance and sensitivity of nodes' features and …
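The excerpt does not specify how HeteroRR perturbs features, but its building block, classical randomized response under local differential privacy, can be sketched as follows. This is a minimal illustration of the base primitive only, not the authors' heterogeneous mechanism; the function name and the example feature vector are hypothetical.

```python
import math
import random

def randomized_response(bit: int, epsilon: float) -> int:
    """Classical (homogeneous) randomized response for one binary value.

    With probability p = e^eps / (e^eps + 1), report the true bit;
    otherwise flip it. This satisfies eps-local differential privacy.
    """
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p else 1 - bit

# Perturb a node's binary feature vector before it enters GNN training.
features = [1, 0, 1, 1, 0]
noisy = [randomized_response(b, epsilon=2.0) for b in features]
```

HeteroRR, as described, would go further by assigning different flipping probabilities to different features and edges according to their importance and sensitivity, rather than using a single uniform probability as above.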

Tags: differential privacy, networks, neural networks, privacy, response
