Bayesian Learned Models Can Detect Adversarial Malware For Free
March 28, 2024, 4:11 a.m. | Bao Gia Doan, Dang Quang Nguyen, Paul Montague, Tamas Abraham, Olivier De Vel, Seyit Camtepe, Salil S. Kanhere, Ehsan Abbasnejad, Damith C. Ranasinghe
cs.CR updates on arXiv.org arxiv.org
Abstract: The vulnerability of machine learning-based malware detectors to adversarial attacks has prompted the need for robust solutions. Adversarial training is an effective method but is computationally expensive to scale up to large datasets, and it sacrifices model performance for robustness. We hypothesize that adversarial malware exploits the low-confidence regions of models and can be identified using the epistemic uncertainty of ML approaches -- epistemic uncertainty in a machine learning-based malware detector is …
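The core idea in the abstract -- that adversarial inputs land in low-confidence regions where a Bayesian model's posterior samples disagree -- can be illustrated with a minimal sketch. This is not the paper's actual detector: the ensemble of linear classifiers below stands in for samples from a Bayesian model's weight posterior, and the feature vectors, weight scales, and threshold are all illustrative assumptions. Epistemic uncertainty is estimated as the variance of the ensemble's predicted probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical stand-in for a Bayesian malware detector: an ensemble of
# linear classifiers whose weights play the role of posterior samples.
n_posterior_samples, n_features = 30, 8
w_mean = np.ones(n_features)                      # illustrative posterior mean
posterior_weights = w_mean + 0.3 * rng.normal(size=(n_posterior_samples, n_features))

def predict_with_uncertainty(x):
    """Mean malware probability and epistemic uncertainty (variance of
    the predictions across posterior samples) for feature vector x."""
    probs = sigmoid(posterior_weights @ x)        # one prediction per posterior sample
    return probs.mean(), probs.var()

# An input the posterior samples agree on (high-confidence region) ...
x_benign_like = 0.5 * np.ones(n_features)
# ... versus one orthogonal to the posterior mean, where the samples
# disagree -- a low-confidence region like those adversarial malware
# is hypothesized to exploit.
x_adversarial_like = 5.0 * np.array([1, -1, 1, -1, 1, -1, 1, -1], dtype=float)

_, var_confident = predict_with_uncertainty(x_benign_like)
_, var_uncertain = predict_with_uncertainty(x_adversarial_like)

# Flag inputs whose epistemic uncertainty exceeds a chosen threshold.
THRESHOLD = 0.01                                  # illustrative, not from the paper
print(var_confident < THRESHOLD)   # agrees with the posterior: not flagged
print(var_uncertain > THRESHOLD)   # high disagreement: flagged for review
```

Because the uncertainty comes from predictions the model already produces, flagging costs no extra training, which is the sense in which such detection comes "for free".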