June 13, 2022, 1:20 a.m. | Yu Wang, Yuying Zhao, Yushun Dong, Huiyuan Chen, Jundong Li, Tyler Derr

cs.CR updates on arXiv.org arxiv.org

Graph Neural Networks (GNNs) have shown great power in learning node
representations on graphs. However, they may inherit historical prejudices from
training data, leading to discriminatory bias in predictions. Although some
works have developed fair GNNs, most of them directly borrow fair
representation learning techniques from non-graph domains without considering
the potential problem of sensitive attribute leakage caused by feature
propagation in GNNs. However, we empirically observe that feature propagation
can alter the correlation of previously innocuous non-sensitive features to …
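The leakage mechanism the abstract describes can be illustrated with a small synthetic example (not the paper's method): on a graph that is homophilous with respect to a sensitive attribute, one round of mean-aggregation feature propagation, as in a typical GNN layer, can make a feature that was nearly uncorrelated with the sensitive attribute become strongly correlated with it. The graph, feature values, and two-clique structure below are all hypothetical, chosen only to make the effect visible.

```python
import numpy as np

# Hypothetical setup: 10 nodes, binary sensitive attribute s (two groups),
# and a non-sensitive feature x that is almost uncorrelated with s.
n = 10
s = np.array([0] * 5 + [1] * 5)
x = np.array([2.0, -2.0, 2.0, -2.0, 1.0,    # group 0
              2.0, -2.0, 2.0, -2.0, -1.0])  # group 1

# Homophilous graph: two cliques, one per sensitive group (self-loops
# included, so each node also aggregates its own feature).
A = np.zeros((n, n))
A[:5, :5] = 1.0
A[5:, 5:] = 1.0

# One propagation step: x' = D^{-1} A x (row-normalized mean aggregation).
x_prop = A @ x / A.sum(axis=1)

corr_before = np.corrcoef(x, s)[0, 1]
corr_after = np.corrcoef(x_prop, s)[0, 1]
print(f"corr(x, s) before propagation: {corr_before:+.3f}")
print(f"corr(x, s) after  propagation: {corr_after:+.3f}")
```

Because propagation averages each node's feature with same-group neighbors, the small difference between the two group means dominates after one step, and the propagated feature becomes a near-perfect proxy for the sensitive attribute even though the raw feature was almost uncorrelated with it.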

