June 14, 2023, 1:10 a.m. | Dipkamal Bhusal, Rosalyn Shin, Ajay Ashok Shewale, Monish Kumar Manikya Veerabhadran, Michael Clifford, Sara Rampazzi, Nidhi Rastogi

cs.CR updates on arXiv.org

Interpretability, trustworthiness, and usability are key considerations in
high-stakes security applications, especially when deep learning models are
used. While these models are known for their high accuracy, they behave as
black boxes: identifying the important features and factors that led to a
classification or prediction is difficult. This can lead to uncertainty and
distrust, especially when an incorrect prediction carries severe
consequences. Explanation methods therefore aim to provide insights into the
inner workings of deep learning models. However, …

