May 26, 2023, 1:19 a.m. | Shuchang Tao, Qi Cao, Huawei Shen, Yunfan Wu, Bingbing Xu, Xueqi Cheng

cs.CR updates on

Graph neural networks (GNNs) have achieved remarkable success in various tasks; however, their vulnerability to adversarial attacks raises concerns for real-world applications. Existing defense methods can resist some attacks but suffer unbearable performance degradation under other, unknown attacks. This is because they rely either on a limited set of observed adversarial examples for optimization (adversarial training) or on specific heuristics that alter the graph or model structure (graph purification or robust aggregation). In this paper, we propose an Invariant causal DEfense method against …
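The adversarial-training baseline the abstract mentions optimizes a model on perturbed versions of its inputs, so robustness only extends to attacks resembling those perturbations. A minimal illustrative sketch of that general idea (not the paper's proposed method) is below, using an FGSM-style perturbation of node features for a plain logistic-regression classifier; all names, sizes, and hyperparameters are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of adversarial training (NOT the paper's method):
# a logistic-regression "node classifier" trained on features perturbed
# along the sign of the loss gradient, within an L-infinity budget eps.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy node features and binary labels (assumed synthetic data).
X = rng.normal(size=(200, 8))
w_true = rng.normal(size=8)
y = (X @ w_true > 0).astype(float)

w = np.zeros(8)
lr, eps = 0.1, 0.05  # step size and perturbation budget (assumed values)

for _ in range(300):
    # FGSM-style attack: gradient of the logistic loss w.r.t. the inputs.
    p = sigmoid(X @ w)
    grad_x = np.outer(p - y, w)          # d(loss)/dX for logistic loss
    X_adv = X + eps * np.sign(grad_x)    # perturb within the budget
    # Train on the perturbed (adversarial) examples only.
    p_adv = sigmoid(X_adv @ w)
    grad_w = X_adv.T @ (p_adv - y) / len(y)
    w -= lr * grad_w

acc = float(np.mean((sigmoid(X @ w) > 0.5) == (y > 0.5)))
print(acc)
```

The limitation the abstract points to is visible here: the model is robust only to perturbations shaped like the ones it saw during training, so an attack outside that family can still degrade it.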

