Task and Model Agnostic Adversarial Attack on Graph Neural Networks. (arXiv:2112.13267v2 [cs.LG] UPDATED)
Dec. 6, 2022, 2:10 a.m. | Kartik Sharma, Samidha Verma, Sourav Medya, Sayan Ranu, Arnab Bhattacharya
cs.CR updates on arXiv.org
Adversarial attacks on Graph Neural Networks (GNNs) reveal their security
vulnerabilities, limiting their adoption in safety-critical applications.
However, existing attack strategies rely on knowledge of either the GNN
model being used or the predictive task being attacked. Is this knowledge
necessary? For example, a graph may be used for multiple downstream tasks
unknown to a practical attacker. It is thus important to test the vulnerability
of GNNs to adversarial perturbations in a model- and task-agnostic setting. In
this …
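The excerpt cuts off before describing the paper's actual method, so the following is not that method. As generic background only, a structural adversarial perturbation edits the graph itself (adding or removing edges) rather than node features; a minimal task- and model-agnostic baseline is to flip a small budget of edges in the adjacency matrix. All names below (`random_edge_flip`, `budget`) are illustrative assumptions, not from the paper:

```python
import random

def random_edge_flip(adj, budget, seed=0):
    """Toggle `budget` node pairs in a symmetric 0/1 adjacency matrix.

    A task- and model-agnostic baseline perturbation: each chosen pair
    (i, j) has its edge flipped -- added if absent, removed if present.
    Real attacks choose the pairs to flip more cleverly, but need no
    knowledge of the downstream model or task to apply the edit itself.
    """
    rng = random.Random(seed)
    n = len(adj)
    # Work on a copy so the clean graph is preserved.
    perturbed = [row[:] for row in adj]
    # All candidate pairs in the upper triangle (no self-loops).
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    for i, j in rng.sample(pairs, budget):
        perturbed[i][j] ^= 1  # XOR toggles 0 <-> 1
        perturbed[j][i] ^= 1  # keep the matrix symmetric
    return perturbed
```

The `budget` parameter models the usual imperceptibility constraint: the attacker may change only a few edges so the perturbed graph stays close to the original.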