Nov. 23, 2023, 2:19 a.m. | Yu Zhou, Zihao Dong, Guofeng Zhang, Jingchen Tang

cs.CR updates on arXiv.org

While graph neural networks (GNNs) have achieved state-of-the-art performance on many real-world tasks, including graph classification and node classification, recent work has demonstrated that they are also extremely vulnerable to adversarial attacks. Most previous work has focused on attacking node classification networks under impractical white-box scenarios. In this work, we propose a non-targeted hard-label black-box node injection attack on graph neural networks, which, to the best of our knowledge, is the first of its kind. Under this setting, more …
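The abstract's setting can be made concrete with a toy sketch. In a hard-label black-box node injection attack, the attacker cannot see the victim model's parameters, gradients, or even logits; they may only query it and observe the predicted class, and their perturbation budget is the injection of new nodes (with edges and features) rather than edits to the original graph. The code below is a minimal illustrative sketch of that query loop, not the paper's method: the "GNN" oracle is a stand-in toy classifier, and the attack is a simple random search, with all names and budgets hypothetical.

```python
import numpy as np

# Hypothetical hard-label oracle: the attacker sees only the predicted class.
# A toy stand-in for a real GNN: classify the target node by the mean sign
# of its neighbors' features.
def hard_label_oracle(adj, feats, target):
    neighbors = np.nonzero(adj[target])[0]
    pool = feats[neighbors] if len(neighbors) else feats[[target]]
    return int(pool.mean() > 0)  # class 0 or 1

def node_injection_attack(adj, feats, target, n_inject=2, n_queries=200, seed=0):
    """Randomly inject n_inject nodes wired to the target node and query the
    hard-label oracle until the predicted class flips (non-targeted attack).
    Returns the perturbed (adjacency, features) on success, else None."""
    rng = np.random.default_rng(seed)
    orig = hard_label_oracle(adj, feats, target)
    n = adj.shape[0]
    for _ in range(n_queries):
        # Candidate perturbation: new nodes, each connected to the target,
        # with random features inside an assumed attacker budget [-1, 1].
        new_feats = rng.uniform(-1, 1, size=(n_inject, feats.shape[1]))
        adj2 = np.zeros((n + n_inject, n + n_inject), dtype=int)
        adj2[:n, :n] = adj
        for j in range(n_inject):
            adj2[target, n + j] = adj2[n + j, target] = 1
        feats2 = np.vstack([feats, new_feats])
        if hard_label_oracle(adj2, feats2, target) != orig:
            return adj2, feats2  # success: hard label flipped
    return None  # query budget exhausted
```

The key constraint the sketch captures is that each candidate perturbation costs one oracle query returning a single bit of information (label flipped or not), which is what makes the hard-label setting far harder than white-box or score-based black-box attacks.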

