Aug. 16, 2022, 1:20 a.m. | Jintang Li, Jie Liao, Ruofan Wu, Liang Chen, Jiawang Dan, Changhua Meng, Zibin Zheng, Weiqiang Wang

cs.CR updates on arXiv.org

Graph convolutional networks (GCNs) have been shown to be vulnerable to small
adversarial perturbations, which poses a severe threat and largely limits
their application in security-critical scenarios. To mitigate this threat,
considerable research effort has been devoted to increasing the robustness of
GCNs against adversarial attacks. However, current defense approaches are
typically designed for the whole graph and optimize global performance,
making it difficult to protect important local nodes from stronger targeted
adversarial attacks. In this work, we present …
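To make the vulnerability concrete, here is a minimal, illustrative sketch (not the paper's method): a single GCN propagation step, H' = D^{-1/2}(A + I)D^{-1/2} X W, on a small synthetic graph, showing how flipping just one edge incident to a target node changes that node's output. All node features, weights, and the graph itself are made up for illustration.

```python
import numpy as np

def gcn_layer(A, X, W):
    # One GCN propagation step with self-loops and symmetric
    # normalization: D^{-1/2} (A + I) D^{-1/2} X W
    # (nonlinearity omitted for brevity).
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(d ** -0.5)
    return D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W

# Tiny synthetic path graph with random features and weights.
rng = np.random.default_rng(0)
n_nodes, n_feats, n_classes = 6, 4, 2
A = np.zeros((n_nodes, n_nodes))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]:
    A[i, j] = A[j, i] = 1.0
X = rng.normal(size=(n_nodes, n_feats))
W = rng.normal(size=(n_feats, n_classes))

clean = gcn_layer(A, X, W)

# A "small" structural perturbation: insert one edge touching
# target node 0 (a toy stand-in for a targeted attack).
A_adv = A.copy()
A_adv[0, 4] = A_adv[4, 0] = 1.0
perturbed = gcn_layer(A_adv, X, W)

# The target node's representation shifts even though only one
# edge changed anywhere in the graph.
delta = np.abs(clean[0] - perturbed[0]).max()
print(f"max change in node 0's output after one edge flip: {delta:.3f}")
```

Defenses evaluated only on global (whole-graph) accuracy can look robust while individual nodes, like node 0 here, are still easy to move with a single targeted edit, which is the gap the abstract highlights.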

