Unnoticeable Backdoor Attacks on Graph Neural Networks. (arXiv:2303.01263v1 [cs.CR])
cs.CR updates on arXiv.org arxiv.org
Graph Neural Networks (GNNs) have achieved promising results in various tasks
such as node classification and graph classification. Recent studies find that
GNNs are vulnerable to adversarial attacks; however, effective backdoor attacks
on graphs remain an open problem. In particular, a backdoor attack poisons the
graph by attaching a trigger and the target class label to a set of nodes in the
training graph. A backdoored GNN trained on the poisoned graph will then be
misled to predict test nodes to …
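The poisoning step the abstract describes — stamping a trigger onto selected training nodes and relabeling them with the target class — can be sketched as below. This is a minimal illustration, not the paper's method; the function name, the feature-space trigger representation, and all parameters are assumptions for the sketch.

```python
import numpy as np

def poison_graph(features, labels, poison_idx, trigger, target_class):
    """Backdoor-poison a node-classification dataset (illustrative sketch).

    features:     (num_nodes, num_feats) node feature matrix.
    labels:       (num_nodes,) integer class labels.
    poison_idx:   indices of training nodes to poison.
    trigger:      (num_feats,) feature-space trigger pattern; nonzero
                  entries are stamped onto the poisoned nodes.
    target_class: label the attacker wants triggered nodes classified as.

    All names and the feature-space trigger encoding are hypothetical,
    chosen for the sketch; they are not taken from the paper.
    """
    feats = features.copy()
    labs = labels.copy()
    mask = trigger != 0  # overwrite only the feature dims the trigger uses
    for i in poison_idx:
        feats[i, mask] = trigger[mask]   # attach the trigger
        labs[i] = target_class           # relabel with the target class
    return feats, labs
```

A GNN trained on `(feats, labs)` then associates the trigger pattern with `target_class`, so at test time the attacker can flip a node's prediction by attaching the same trigger to it.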