Nov. 22, 2022, 2:20 a.m. | Mingxuan Ju, Yujie Fan, Chuxu Zhang, Yanfang Ye

cs.CR updates on arXiv.org arxiv.org

Graph Neural Networks (GNNs) have drawn significant attention over the years
and have been broadly applied in essential applications that demand solid
robustness or rigorous security standards, such as product recommendation and
user behavior modeling. In these scenarios, exploiting a GNN's vulnerabilities
to degrade its performance becomes highly attractive to adversaries. Previous
attacks mainly rely on structural perturbations of, or node injections into,
existing graphs, guided by gradients from surrogate models. Although they
deliver promising results, several limitations remain. For …
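The surrogate-guided structural attack described above can be illustrated with a minimal sketch. Here a linearized two-layer GCN (a common surrogate choice in this line of work) stands in for the victim model, and candidate edge flips are scored greedily by how much each flip reduces the surrogate's classification margin for a target node. All names, sizes, and the random surrogate weights are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Hypothetical toy setup: 6 nodes, 4 features, 2 classes.
rng = np.random.default_rng(0)
n, d, c = 6, 4, 2
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.triu(A, 1); A = A + A.T          # symmetric adjacency, no self-loops
X = rng.random((n, d))                  # node features
W = rng.random((d, c))                  # surrogate weights (assumed trained)
target, true_label = 0, 1               # node under attack and its label

def normalize(adj):
    # Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}
    a_hat = adj + np.eye(len(adj))
    d_inv = np.diag(a_hat.sum(1) ** -0.5)
    return d_inv @ a_hat @ d_inv

def margin(adj):
    # Linearized 2-layer GCN surrogate: logits = A_hat A_hat X W.
    a_n = normalize(adj)
    z = (a_n @ a_n @ X @ W)[target]
    # Margin of the true class over the best competing class;
    # the attacker wants to drive this down.
    return z[true_label] - np.max(np.delete(z, true_label))

# Greedy structural perturbation: score every single edge flip by the
# drop in surrogate margin, then apply the most damaging one.
base = margin(A)
best_flip, best_drop = None, 0.0
for i in range(n):
    for j in range(i + 1, n):
        A2 = A.copy()
        A2[i, j] = A2[j, i] = 1 - A2[i, j]   # flip edge (i, j)
        drop = base - margin(A2)
        if drop > best_drop:
            best_flip, best_drop = (i, j), drop

if best_flip is not None:
    i, j = best_flip
    A[i, j] = A[j, i] = 1 - A[i, j]
print("flipped edge", best_flip, "margin drop %.3f" % best_drop)
```

Gradient-based attackers replace the exhaustive loop with the gradient of the loss with respect to the adjacency matrix, picking flips by gradient magnitude; the greedy loss-difference scoring here is an exact but slower stand-in for that signal on a graph this small.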

