Nov. 23, 2022, 2:20 a.m. | Wenqi Fan, Wei Jin, Xiaorui Liu, Han Xu, Xianfeng Tang, Suhang Wang, Qing Li, Jiliang Tang, Jianping Wang, Charu Aggarwal

cs.CR updates on arXiv.org

Graph Neural Networks (GNNs) have boosted performance on many graph-related tasks. Despite this success, recent studies have shown that GNNs are highly vulnerable to adversarial attacks, in which an adversary can mislead a GNN's predictions by modifying the graph. On the other hand, GNN explanation methods such as GNNExplainer provide a better understanding of a trained GNN model by identifying a small subgraph and the features that are most influential for its prediction. In this paper, we first perform empirical studies to validate …
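
The GNNExplainer idea mentioned in the abstract can be made concrete with a short sketch: optimize a sparse mask over the edges of the input graph so that the masked subgraph alone still reproduces the model's prediction; the surviving high-weight edges form the explanatory subgraph. The code below is a minimal illustration in plain PyTorch, not the paper's implementation: the toy one-layer GNN, the random 4-node graph, and all hyperparameters are assumed placeholders.

```python
# Minimal GNNExplainer-style sketch (illustrative only, not the paper's code):
# learn a sparse edge mask whose masked subgraph preserves the model's prediction.
import torch
import torch.nn.functional as F

class TinyGCN(torch.nn.Module):
    """One-layer mean-aggregation GNN used only as a stand-in model."""
    def __init__(self, in_dim, num_classes):
        super().__init__()
        self.lin = torch.nn.Linear(in_dim, num_classes)

    def forward(self, x, adj):
        # Row-normalised neighbourhood averaging, then a linear classifier.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        return self.lin(adj @ x / deg)

# Toy graph: 4 nodes with random features and a dense adjacency (assumed data).
x = torch.randn(4, 8)
adj = torch.tensor([[0., 1., 1., 0.],
                    [1., 0., 1., 0.],
                    [1., 1., 0., 1.],
                    [0., 0., 1., 0.]])
model = TinyGCN(8, 3)
model.eval()
target = model(x, adj).argmax(dim=1)           # predictions to be explained

# Optimize an edge mask: keep the original prediction, encourage sparsity.
edge_logits = torch.nn.Parameter(torch.randn_like(adj))
opt = torch.optim.Adam([edge_logits], lr=0.05)
for _ in range(200):
    mask = torch.sigmoid(edge_logits) * adj    # soft masked subgraph
    logits = model(x, mask)
    loss = F.cross_entropy(logits, target) + 0.05 * mask.sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Edges with high mask values form the explanatory subgraph.
print((torch.sigmoid(edge_logits) * adj > 0.5).nonzero())
```

The same masked-forward-pass machinery also hints at why such explanations matter under attack: an adversarial edge insertion or deletion changes `adj`, and comparing the explanatory subgraphs before and after the modification is one way to study how the perturbation shifts what the model relies on.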

