June 8, 2023, 1:11 a.m. | Kunal Mukherjee, Joshua Wiedemeier, Tianhao Wang, Muhyun Kim, Feng Chen, Murat Kantarcioglu, Kangkook Jee

cs.CR updates on arXiv.org

The black-box nature of complex Neural Network (NN)-based models has hindered
their widespread adoption in security domains due to the lack of logical
explanations and actionable follow-ups for their predictions. To enhance the
transparency and accountability of Graph Neural Network (GNN) security models
used in system provenance analysis, we propose PROVEXPLAINER, a framework for
projecting abstract GNN decision boundaries onto interpretable feature spaces.


We first replicate the decision-making process of GNN-based security models
using simpler and explainable models such as …
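
The abstract is cut off before naming the surrogate model family, but the general recipe it gestures at, model distillation, can be sketched: fit a simple, interpretable model to the GNN's own predictions over human-readable features, then read its rules as an explanation of the GNN's decision boundary. The sketch below is a hypothetical illustration of that idea, not the paper's implementation; the feature matrix, GNN outputs, and feature names are all stand-ins.

    # Minimal surrogate-distillation sketch (hypothetical; not PROVEXPLAINER's code).
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text
    from sklearn.metrics import accuracy_score

    # Hypothetical inputs:
    #   X_interp  - (n_graphs, n_features) human-readable features extracted
    #               from each provenance graph (e.g., process/file counts)
    #   gnn_preds - labels a trained GNN assigned to the same graphs
    rng = np.random.default_rng(0)
    X_interp = rng.random((200, 5))                                # stand-in features
    gnn_preds = (X_interp[:, 0] + X_interp[:, 3] > 1).astype(int)  # stand-in GNN output

    # Fit a shallow decision tree to mimic the GNN's decisions, measure
    # fidelity (agreement with the GNN), and print the learned rules.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X_interp, gnn_preds)

    fidelity = accuracy_score(gnn_preds, surrogate.predict(X_interp))
    print(f"fidelity to GNN predictions: {fidelity:.2f}")
    print(export_text(surrogate, feature_names=[f"feat_{i}" for i in range(5)]))

The key quantity is fidelity, how often the surrogate agrees with the GNN, rather than accuracy against ground truth: the surrogate explains the model, not the task.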
