Aug. 11, 2023, 6:11 a.m. | Yiling He, Jian Lou, Zhan Qin, Kui Ren

cs.CR updates on arXiv.org arxiv.org

Deep learning classifiers achieve state-of-the-art performance in various
risk detection applications. They learn rich semantic representations and are
expected to discover risk behaviors automatically. However, because these
classifiers lack transparency, the behavioral semantics they capture cannot be
conveyed to downstream security experts to reduce their heavy workload in
security analysis. Although feature attribution (FA) methods can be used to
explain deep learning models, the underlying classifier remains blind to which
behaviors are suspicious, and the generated explanations cannot adapt to downstream …

