Oct. 26, 2022, 1:23 a.m. | Haibin Zheng, Haiyang Xiong, Jinyin Chen, Haonan Ma, Guohan Huang

cs.CR updates on arXiv.org arxiv.org

Graph neural networks (GNNs), with their powerful representation capability, have been
widely applied to various areas, such as biological gene prediction and social
recommendation. Recent works have exposed that GNNs are vulnerable to backdoor
attacks, i.e., models trained on maliciously crafted training samples are easily
fooled by patched samples. Most of the proposed studies launch the backdoor
attack using a trigger that is either a randomly generated subgraph (e.g., an
Erdős–Rényi backdoor), chosen for its low computational cost, or a
gradient-based generative …
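The random-subgraph trigger the abstract mentions can be sketched with NetworkX: sample an Erdős–Rényi graph and splice it into a clean input graph. This is an illustrative sketch of the general ER-trigger idea, not the paper's implementation; the function names, parameters, and attachment strategy are assumptions.

```python
import networkx as nx


def make_er_trigger(n_nodes: int, p: float, seed: int = 0) -> nx.Graph:
    # Sample an Erdős–Rényi graph G(n, p) to serve as the backdoor trigger.
    # (Hypothetical helper; the paper's actual trigger construction may differ.)
    return nx.erdos_renyi_graph(n_nodes, p, seed=seed)


def patch_graph(graph: nx.Graph, trigger: nx.Graph, attach_node: int) -> nx.Graph:
    # Relabel trigger nodes past the host graph's node range, merge the two
    # graphs, and wire one trigger node to `attach_node` in the host graph.
    offset = max(graph.nodes) + 1
    trigger = nx.relabel_nodes(trigger, {v: v + offset for v in trigger.nodes})
    patched = nx.compose(graph, trigger)
    patched.add_edge(attach_node, offset)
    return patched


# Usage: poison a clean 10-node path graph with a 4-node ER trigger.
clean = nx.path_graph(10)
trigger = make_er_trigger(4, p=0.8, seed=42)
poisoned = patch_graph(clean, trigger, attach_node=0)
```

A poisoning attack would apply this patch to a fraction of the training graphs and flip their labels to the attacker's target class, so the trained GNN associates the trigger subgraph with that class.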

Tags: attack, backdoor, networks, neural networks
