Improving Fairness in Graph Neural Networks via Mitigating Sensitive Attribute Leakage. (arXiv:2206.03426v2 [cs.LG] UPDATED)
June 13, 2022, 1:20 a.m. | Yu Wang, Yuying Zhao, Yushun Dong, Huiyuan Chen, Jundong Li, Tyler Derr
cs.CR updates on arXiv.org arxiv.org
Graph Neural Networks (GNNs) have shown great power in learning node
representations on graphs. However, they may inherit historical prejudices from
training data, leading to discriminatory bias in predictions. Although some
work has developed fair GNNs, most of them directly borrow fair representation
learning techniques from non-graph domains without considering the potential
problem of sensitive attribute leakage caused by feature propagation in GNNs.
Yet we empirically observe that feature propagation can alter the
correlation of previously innocuous non-sensitive features to …
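The leakage effect described above can be illustrated with a toy sketch (this is not the paper's method, and the graph, features, and group structure are invented for illustration): on a homophilous graph, a mean-aggregation propagation step pulls each node's feature toward its neighborhood mean, which can sharply increase the correlation between a non-sensitive feature and the sensitive attribute.

```python
import numpy as np

# Hypothetical toy example: 8 nodes, binary sensitive attribute s
# splitting them into two groups of 4.
s = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)

# A non-sensitive scalar feature, only weakly correlated with s at first.
x = np.array([0, 1, 2, 3, 1, 2, 3, 4], dtype=float)

# Homophilous adjacency: each node links to the other 3 nodes in its
# own sensitive-attribute group, and to no one outside it.
A = np.zeros((8, 8))
A[:4, :4] = 1.0
A[4:, 4:] = 1.0
np.fill_diagonal(A, 0.0)

# One GNN-style propagation step: replace each node's feature by the
# mean of its neighbors' features.
x_prop = (A @ x) / A.sum(axis=1)

corr_before = np.corrcoef(x, s)[0, 1]
corr_after = np.corrcoef(x_prop, s)[0, 1]
print(f"corr(x, s) before propagation: {corr_before:.3f}")  # ~0.408
print(f"corr(x, s) after propagation:  {corr_after:.3f}")   # ~0.802
```

Because neighborhoods are aligned with the sensitive groups, propagation shrinks within-group feature variance while the between-group gap persists, so the once weakly correlated feature becomes a strong proxy for `s` — the leakage the abstract warns about.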
Jobs in InfoSec / Cybersecurity
SOC 2 Manager, Audit and Certification
@ Deloitte | US and CA Multiple Locations
Information Security Engineers
@ D. E. Shaw Research | New York City
Cybersecurity Consultant- Governance, Risk, and Compliance team
@ EY | Tel Aviv, IL, 6706703
Professional Services Consultant
@ Zscaler | Escazú, Costa Rica
IT Security Analyst
@ Briggs & Stratton | Wauwatosa, WI, US, 53222
Cloud DevSecOps Engineer - Team Lead
@ Motorola Solutions | Krakow, Poland