Feb. 26, 2024, 5:11 a.m. | Daniel Gibert, Giulio Zizzo, Quan Le, Jordi Planes

cs.CR updates on arXiv.org

arXiv:2402.15267v1 Announce Type: new
Abstract: Deep learning-based malware detectors have been shown to be susceptible to adversarial malware examples, i.e., malware examples deliberately manipulated to avoid detection. In light of the vulnerability of deep learning detectors to subtle input file modifications, we propose a practical defense against adversarial malware examples inspired by (de)randomized smoothing. In this work, we reduce the chances of sampling adversarial content injected by malware authors by selecting correlated subsets of bytes, …
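
The abstract only sketches the mechanism, so here is a minimal, hedged illustration of the chunk-based (de)randomized smoothing idea it describes: the input file is split into contiguous byte chunks, each chunk is classified independently, and the file-level verdict is a majority vote, so adversarial content injected by an attacker can only sway the few chunks it lands in. The `base_detector` callable and `chunk_size` value below are hypothetical placeholders, not the paper's actual classifier or parameters.

```python
# Sketch of (de)randomized smoothing for malware detection, assuming
# the "correlated subsets of bytes" from the abstract are contiguous
# chunks. `base_detector` is a hypothetical stand-in for the paper's
# deep learning classifier.

from collections import Counter
from typing import Callable, List


def smoothed_classify(
    file_bytes: bytes,
    base_detector: Callable[[bytes], int],  # returns 0 (benign) or 1 (malware)
    chunk_size: int = 512,  # assumed value, not from the paper
) -> int:
    # Select correlated subsets: contiguous, non-overlapping chunks.
    chunks: List[bytes] = [
        file_bytes[i : i + chunk_size]
        for i in range(0, len(file_bytes), chunk_size)
    ]
    if not chunks:
        return 0  # treat an empty file as benign in this sketch

    # Classify each chunk independently; injected adversarial bytes
    # only influence the chunks they fall in.
    votes = Counter(base_detector(chunk) for chunk in chunks)

    # Majority vote over per-chunk predictions yields the file label.
    return votes.most_common(1)[0][0]
```

The vote aggregation is what gives the robustness argument its shape: flipping the file-level decision requires corrupting a majority of chunk predictions, which bounds how much injected content can matter.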
