Adversarial Inter-Group Link Injection Degrades the Fairness of Graph Neural Networks

Web: http://arxiv.org/abs/2209.05957

Sept. 14, 2022, 1:20 a.m. | Hussain Hussain, Meng Cao, Sandipan Sikdar, Denis Helic, Elisabeth Lex, Markus Strohmaier, Roman Kern

cs.CR updates on arXiv.org

We present evidence for the existence and effectiveness of adversarial attacks on graph neural networks (GNNs) that aim to degrade fairness. These attacks can disadvantage a particular subgroup of nodes in GNN-based node classification, where nodes of the underlying network have sensitive attributes, such as race or gender. We conduct qualitative and experimental analyses explaining how adversarial link injection impairs the fairness of GNN predictions. For example, an attacker can compromise the fairness of GNN-based node classification by injecting adversarial …
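
The truncated abstract describes link injection only in general terms. As a purely illustrative companion, the sketch below implements a naive greedy variant of the idea in plain PyTorch: it trains a toy two-layer GCN on a synthetic graph with a binary sensitive attribute, then injects a small budget of edges, each chosen from a random candidate pool to maximize the statistical parity difference of the model's predictions. This greedy black-box search is an assumption for illustration, not the authors' attack; the `gcn_logits` and `parity_diff` helpers and all hyperparameters are hypothetical.

```python
# Illustrative sketch only -- NOT the paper's attack or code. A toy two-layer
# GCN is trained on a synthetic graph, then a greedy black-box attacker
# injects edges chosen to widen the statistical parity gap between two
# sensitive groups. All names and hyperparameters here are assumptions.
import torch

torch.manual_seed(0)
n, d = 60, 8                                 # nodes, feature dimension
X = torch.randn(n, d)                        # node features
s = (torch.rand(n) < 0.5).long()             # binary sensitive attribute
y = (X[:, 0] + 0.5 * s.float() > 0).long()   # synthetic labels, correlated with s
A = (torch.rand(n, n) < 0.05).float()
A = ((A + A.T) > 0).float()                  # symmetric random adjacency
A.fill_diagonal_(0)

def normalize(adj):
    # Symmetrically normalized adjacency with self-loops: D^-1/2 (A+I) D^-1/2
    a_hat = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a_hat.sum(1).pow(-0.5)
    return d_inv_sqrt[:, None] * a_hat * d_inv_sqrt[None, :]

W1 = torch.randn(d, 16, requires_grad=True)
W2 = torch.randn(16, 2, requires_grad=True)

def gcn_logits(adj):
    # Two-layer GCN returning class logits
    a_hat = normalize(adj)
    return a_hat @ torch.relu(a_hat @ X @ W1) @ W2

opt = torch.optim.Adam([W1, W2], lr=0.01)
for _ in range(200):                         # train the victim classifier
    opt.zero_grad()
    torch.nn.functional.cross_entropy(gcn_logits(A), y).backward()
    opt.step()

def parity_diff(adj):
    # Statistical parity difference: |P(yhat=1 | s=0) - P(yhat=1 | s=1)|
    with torch.no_grad():
        pred = gcn_logits(adj).argmax(1)
    return (pred[s == 0].float().mean() - pred[s == 1].float().mean()).abs().item()

print("parity difference before attack:", parity_diff(A))

# Greedy link injection: for each unit of budget, sample candidate node
# pairs and commit the single new edge that most widens the parity gap.
for _ in range(10):                          # attack budget: 10 edges
    best_gap, best_edge = parity_diff(A), None
    for _ in range(50):                      # candidate pool per step
        i, j = torch.randint(n, (2,)).tolist()
        if i == j or A[i, j] == 1:
            continue
        A[i, j] = A[j, i] = 1                # tentatively inject the edge
        gap = parity_diff(A)
        A[i, j] = A[j, i] = 0                # roll back
        if gap > best_gap:
            best_gap, best_edge = gap, (i, j)
    if best_edge is not None:
        i, j = best_edge
        A[i, j] = A[j, i] = 1                # commit the most harmful edge

print("parity difference after attack:", parity_diff(A))
```

By construction the search only commits edges that widen the measured gap, so the printed parity difference can only grow or stay flat, illustrating how a small edge budget can degrade group fairness without touching node features or labels.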

Tags: adversarial, fairness, injection, link, networks, neural networks
