June 19, 2023, 1:10 a.m. | Daniel Gibert, Jordi Planes, Quan Le, Giulio Zizzo

cs.CR updates on arXiv.org arxiv.org

Malware detectors based on machine learning (ML) have been shown to be
susceptible to adversarial malware examples. However, current methods for
generating adversarial malware examples remain limited: they rely either on
detailed model information (gradient-based attacks) or on detailed model
outputs such as class probabilities (score-based attacks), neither of which is
available in real-world scenarios. Alternatively, adversarial examples might
be crafted using only the label assigned by the detector (label-based attack)
to train a substitute …
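The three threat models mentioned above differ only in what the attacker can observe when querying the detector. A minimal sketch (the detector, feature size, and function names are illustrative stand-ins, not from the paper):

```python
import numpy as np

# Hypothetical detector: a fixed linear model over a 256-bin byte-histogram
# feature vector, used only to contrast the three levels of attacker access.
rng = np.random.default_rng(0)
W = rng.normal(size=256)

def score(x):
    """Score-based access: attacker observes the maliciousness probability."""
    return 1.0 / (1.0 + np.exp(-W @ x))

def label(x):
    """Label-based access: attacker observes only the hard decision (0/1)."""
    return int(score(x) >= 0.5)

def gradient(x):
    """Gradient-based (white-box) access: attacker observes d(score)/dx."""
    s = score(x)
    return s * (1.0 - s) * W

x = rng.random(256)  # stand-in for a malware sample's feature vector
print(label(x))          # hard label only: the weakest, most realistic feedback
print(score(x))          # class probability: richer, rarely exposed in practice
print(gradient(x).shape) # full gradient: requires complete model knowledge
```

In the label-based setting, an attacker querying only `label(x)` can still collect input/label pairs to train a substitute model, which is the direction the abstract points toward.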

