June 8, 2023, 1:11 a.m. | Kunal Mukherjee, Joshua Wiedemeier, Tianhao Wang, Muhyun Kim, Feng Chen, Murat Kantarcioglu, Kangkook Jee

cs.CR updates on arXiv.org

The black-box nature of complex Neural Network (NN)-based models has hindered
their widespread adoption in security domains due to the lack of logical
explanations and actionable follow-ups for their predictions. To enhance the
transparency and accountability of Graph Neural Network (GNN) security models
used in system provenance analysis, we propose PROVEXPLAINER, a framework for
projecting abstract GNN decision boundaries onto interpretable feature spaces.

We first replicate the decision-making process of GNN-based security models using simpler and explainable models such as …
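
In surrogate-model distillation of this kind, an explainable model is typically fit to the GNN's own predictions rather than to ground-truth labels, so that it mimics the GNN's decision boundary over interpretable features. The sketch below is illustrative only: `gnn_model`, `to_features`, and the choice of a decision tree are assumptions, since the truncated abstract does not name the exact explainable models used.

```python
# A minimal sketch of surrogate-model distillation, assuming:
#   - `gnn_model` is a trained GNN classifier with a `predict(graph)` method
#     (a hypothetical wrapper, not a specific library API), and
#   - `to_features` maps each provenance graph to an interpretable
#     feature vector of fixed length.
from sklearn.tree import DecisionTreeClassifier, export_text

def distill_surrogate(graphs, gnn_model, to_features, max_depth=4):
    # Train on the GNN's predictions (not ground truth) so the tree
    # approximates the GNN's decision boundary, not the task itself.
    X = [to_features(g) for g in graphs]
    y = [gnn_model.predict(g) for g in graphs]
    surrogate = DecisionTreeClassifier(max_depth=max_depth)
    surrogate.fit(X, y)
    return surrogate

# The fitted tree then yields human-readable decision rules over the
# interpretable feature space, e.g.:
#   print(export_text(surrogate, feature_names=feature_names))
```

A shallow tree trades some fidelity to the GNN for rules an analyst can read and act on; fidelity is usually measured by how often the surrogate agrees with the GNN on held-out graphs.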

