Generating End-to-End Adversarial Examples for Malware Classifiers Using Explainability. (arXiv:2009.13243v2 [cs.CR] UPDATED)
June 2, 2022, 1:20 a.m. | Ishai Rosenberg, Shai Meir, Jonathan Berrebi, Ilay Gordon, Guillaume Sicard, Eli David
cs.CR updates on arXiv.org arxiv.org
In recent years, the topic of explainable machine learning (ML) has been
extensively researched. Until now, this research has focused on the use-cases of
regular ML users, such as debugging an ML model. This paper takes a different
posture and shows that adversaries can leverage explainable ML to bypass
multi-feature-type malware classifiers. Previous adversarial attacks against
such classifiers only added new features rather than modifying existing ones, in
order to avoid harming the modified malware executable's functionality. Current
attacks use a single algorithm …
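The truncated abstract describes the general idea: an attacker uses explanation scores to decide which features of a malware sample to perturb, restricting changes to features that can be modified without breaking the executable. A minimal illustrative sketch of that idea (not the paper's actual method; the linear surrogate, weights, and `modifiable` set are all hypothetical) might look like:

```python
# Hypothetical sketch of explanation-guided adversarial perturbation.
# Assumes a linear surrogate classifier, where |weight| serves as a
# simple per-feature explanation score; real attacks would use richer
# explainers (e.g. gradient- or SHAP-style attributions).

def feature_importance(weights):
    # Rank feature indices by explanation score, most important first.
    return sorted(range(len(weights)), key=lambda i: abs(weights[i]), reverse=True)

def adversarial_perturb(x, weights, modifiable, budget=2):
    # Perturb only attacker-modifiable features (those that can change
    # without harming the executable's functionality), up to `budget`.
    x = list(x)
    changed = 0
    for i in feature_importance(weights):
        if changed >= budget:
            break
        if i in modifiable:
            # Push the feature in the direction that lowers the malware score.
            x[i] -= 1 if weights[i] > 0 else -1
            changed += 1
    return x

def score(x, weights):
    # Linear malware score: positive means "classified as malware".
    return sum(w * v for w, v in zip(weights, x))

weights = [0.9, -0.2, 0.5, 0.1]   # surrogate model weights (illustrative)
x = [1, 1, 1, 1]                  # original malware feature vector
adv = adversarial_perturb(x, weights, modifiable={0, 2})
print(score(x, weights), score(adv, weights))  # adversarial score is lower
```

The point of the sketch is only the control flow: the explainer's ranking, intersected with the set of safely modifiable features, determines the perturbation order, which is what lets the attack touch existing features instead of only appending new ones.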