March 3, 2023, 2:10 a.m. | Enyan Dai, Minhua Lin, Xiang Zhang, Suhang Wang

cs.CR updates on arXiv.org

Graph Neural Networks (GNNs) have achieved promising results in various tasks
such as node classification and graph classification. Recent studies have found
that GNNs are vulnerable to adversarial attacks; however, effective backdoor
attacks on graphs remain an open problem. In particular, a backdoor attack
poisons the graph by attaching triggers and the target class label to a set of
nodes in the training graph. A backdoored GNN trained on the poisoned graph
will then be misled to predict test nodes to …
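To make the poisoning step concrete, below is a minimal sketch of the generic recipe the abstract describes: attach a trigger subgraph to selected training nodes and relabel them with the target class. This is an illustration only, not the paper's method (the paper's contribution is an unnoticeable trigger design); the function name poison_graph, the clique-shaped trigger, and the dense-adjacency representation are all assumptions made for the example.

import numpy as np

def poison_graph(adj, labels, poison_nodes, target_class, trigger_size=3):
    # Generic graph-backdoor poisoning sketch (hypothetical helper, not the
    # paper's trigger generator). adj: dense (n, n) 0/1 adjacency matrix;
    # labels: length-n array of node class labels.
    adj = adj.copy()
    labels = labels.copy()
    for v in poison_nodes:
        m = adj.shape[0]
        new = np.arange(m, m + trigger_size)
        # Grow the adjacency matrix to make room for the trigger nodes.
        adj = np.pad(adj, ((0, trigger_size), (0, trigger_size)))
        # Trigger = a small clique among the new nodes, wired to the victim.
        adj[np.ix_(new, new)] = 1 - np.eye(trigger_size, dtype=adj.dtype)
        adj[v, new] = 1
        adj[new, v] = 1
        # Poison the labels: the victim (and, here, the trigger nodes)
        # are assigned the attacker's target class.
        labels = np.concatenate([labels, np.full(trigger_size, target_class)])
        labels[v] = target_class
    return adj, labels

A GNN trained on (poisoned_adj, poisoned_labels) learns to associate the trigger pattern with the target class, so at test time the attacker can attach the same trigger to any node to force the target prediction, e.g. poison_graph(adj, labels, poison_nodes=[0, 2], target_class=2).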

