May 26, 2023, 1:19 a.m. | Shuchang Tao, Qi Cao, Huawei Shen, Yunfan Wu, Bingbing Xu, Xueqi Cheng

cs.CR updates on arXiv.org arxiv.org

Graph neural networks (GNNs) have achieved remarkable success in various
tasks; however, their vulnerability to adversarial attacks raises concerns for
real-world applications. Existing defense methods can resist some attacks but
suffer severe performance degradation under other, unknown attacks. This is
because they rely either on a limited set of observed adversarial examples for
optimization (adversarial training) or on specific heuristics that alter the
graph or model structure (graph purification or robust aggregation). In this
paper, we propose an Invariant causal DEfense method against …
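The adversarial-training approach the abstract contrasts against can be sketched in a few lines: repeatedly perturb the graph to produce adversarial examples, then take gradient steps on those perturbed graphs. The sketch below is a minimal NumPy illustration, not the paper's method; the one-layer GCN-style classifier, the random edge-flip "attack", and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d, c = 8, 4, 2                       # nodes, feature dim, classes (toy sizes)
X = rng.normal(size=(n, d))             # node features
y = rng.integers(0, c, size=n)          # node labels
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.triu(A, 1); A = A + A.T          # symmetric adjacency, no self-loops

def normalize(A):
    """Symmetric normalization with self-loops, as in a GCN layer."""
    A_hat = A + np.eye(len(A))
    d_inv = 1.0 / np.sqrt(A_hat.sum(1))
    return A_hat * d_inv[:, None] * d_inv[None, :]

def forward(A, X, W):
    """One propagation step followed by softmax over classes."""
    logits = normalize(A) @ X @ W
    e = np.exp(logits - logits.max(1, keepdims=True))
    return e / e.sum(1, keepdims=True)

def loss_grad(A, X, W, y):
    """Gradient of softmax cross-entropy w.r.t. the weights W."""
    p = forward(A, X, W)
    p[np.arange(n), y] -= 1.0           # p - one_hot(y)
    return (normalize(A) @ X).T @ p / n

def flip_edges(A, k):
    """Toy stand-in for an attack: randomly flip k edges."""
    A = A.copy()
    for _ in range(k):
        i, j = rng.integers(0, n, 2)
        if i != j:
            A[i, j] = A[j, i] = 1.0 - A[i, j]
    return A

W = rng.normal(scale=0.1, size=(d, c))
for step in range(200):                 # adversarial training loop:
    A_adv = flip_edges(A, k=2)          # train on perturbed graphs only
    W -= 0.5 * loss_grad(A_adv, X, W, y)

acc = (forward(A, X, W).argmax(1) == y).mean()
```

The loop only ever sees the perturbations produced by `flip_edges`, which is precisely the limitation the abstract points to: robustness does not transfer to attacks outside the observed perturbation distribution.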

