April 14, 2022, 1:20 a.m. | Ke He, Dan Dongseong Kim, Jing Sun, Jeong Do Yoo, Young Hun Lee, Huy Kang Kim

cs.CR updates on arXiv.org

Due to its high expressiveness and speed, Deep Learning (DL) has become an
increasingly popular choice as the detection algorithm for Network-based
Intrusion Detection Systems (NIDSes). Unfortunately, DL algorithms are
vulnerable to adversarial examples: imperceptible modifications to the input
that cause the DL algorithm to misclassify it. Existing adversarial attacks in
the NIDS domain often manipulate the traffic features directly, an approach
with little practical significance because such feature-level modifications
cannot be replayed in a real network. It remains …
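The abstract does not state which attack method the authors evaluate against; as a minimal sketch of the kind of feature-space perturbation it criticizes, the snippet below applies a standard FGSM-style gradient-sign step to a NIDS classifier's input features. The model, feature tensor, and epsilon value are hypothetical placeholders, not details from the paper.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model: nn.Module, features: torch.Tensor,
                 label: torch.Tensor, epsilon: float = 0.01) -> torch.Tensor:
    """Feature-space FGSM sketch: nudge each traffic feature by +/- epsilon
    in the direction that increases the classifier's loss on the true label.

    Note: this perturbs abstract feature vectors, not packets, which is
    exactly why such attacks are hard to replay on a real network.
    """
    features = features.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(features), label)
    loss.backward()
    adversarial = features + epsilon * features.grad.sign()
    return adversarial.detach()
```

A typical use would be `fgsm_perturb(nids_model, flow_features, true_labels)`, where `nids_model` is any differentiable flow classifier; the output is a perturbed feature vector, not replayable traffic.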

