March 24, 2023, 1:10 a.m. | Gaith Rjoub, Jamal Bentahar, Omar Abdel Wahab, Rabeb Mizouni, Alyssa Song, Robin Cohen, Hadi Otrok, Azzam Mourad

cs.CR updates on arXiv.org arxiv.org

The black-box nature of artificial intelligence (AI) models has been the
source of many concerns in their use for critical applications. Explainable
Artificial Intelligence (XAI) is a rapidly growing research field that aims to
create machine learning models that can provide clear and interpretable
explanations for their decisions and actions. In the field of network
cybersecurity, XAI has the potential to revolutionize the way we approach
network security by enabling us to better understand the behavior of cyber
threats and …
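
As a rough, illustrative sketch of the kind of post-hoc explanation XAI can bring to network security, the snippet below trains a classifier on synthetic "network-flow" features and reports which features drive its benign-vs-malicious decisions. The feature names, the synthetic data, and the use of scikit-learn's permutation importance are assumptions made for illustration only; they are not methods taken from the survey.

```python
# Hypothetical sketch: explaining a network-intrusion classifier with a
# model-agnostic importance measure. Feature names and data are synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for network-flow features (illustrative names).
feature_names = ["duration", "src_bytes", "dst_bytes", "failed_logins", "conn_rate"]
X, y = make_classification(n_samples=2000, n_features=5, n_informative=3, random_state=0)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: a simple, model-agnostic explanation of which
# features most influence the model's predictions on held-out data.
result = permutation_importance(clf, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```

Surveyed XAI techniques for this setting typically go further than global feature rankings (e.g., per-decision attributions), but the sketch conveys the basic idea of making a black-box model's behavior inspectable.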
