Adversarial Robustness of Deep Learning-based Malware Detectors via (De)Randomized Smoothing
Feb. 26, 2024, 5:11 a.m. | Daniel Gibert, Giulio Zizzo, Quan Le, Jordi Planes
cs.CR updates on arXiv.org arxiv.org
Abstract: Deep learning-based malware detectors have been shown to be susceptible to adversarial malware examples, i.e., malware examples that have been deliberately manipulated in order to avoid detection. In light of the vulnerability of deep learning detectors to subtle input file modifications, we propose a practical defense against adversarial malware examples inspired by (de)randomized smoothing. In this work, we reduce the chances of sampling adversarial content injected by malware authors by selecting correlated subsets of bytes, …
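The defense described in the abstract can be illustrated with a minimal sketch: split the input file into contiguous byte chunks, classify each chunk independently, and take a majority vote over the per-chunk labels, so that adversarial content injected into one region can only flip the chunks it overlaps. The `toy_detector` below is a hypothetical stand-in for a trained per-chunk model and is not the paper's detector; chunk size and voting rule are illustrative assumptions.

```python
def smoothed_classify(data: bytes, classify_chunk, chunk_size: int = 512) -> int:
    """(De)randomized-smoothing-style classification: vote over contiguous
    byte chunks. Returns 1 (malicious) if a strict majority of chunks is
    flagged, else 0 (benign)."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    votes = [classify_chunk(c) for c in chunks]
    return int(sum(votes) > len(votes) / 2)

def toy_detector(chunk: bytes) -> int:
    # Hypothetical per-chunk detector for illustration only: flags a chunk
    # if a marker byte pair appears (a real system would use a neural model).
    return int(b"\xde\xad" in chunk)
```

Because each chunk is classified in isolation, an injected payload confined to a single chunk changes at most one vote; it cannot flip the majority decision on a file whose remaining chunks agree.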