Adversarial Examples in Constrained Domains. (arXiv:2011.01183v3 [cs.CR] UPDATED)
Sept. 12, 2022, 1:20 a.m. | Ryan Sheatsley, Nicolas Papernot, Michael Weisman, Gunjan Verma, Patrick McDaniel
cs.CR updates on arXiv.org arxiv.org
Machine learning algorithms have been shown to be vulnerable to adversarial
manipulation through systematic modification of inputs (e.g., adversarial
examples) in domains such as image recognition. Under the default threat model,
the adversary exploits the unconstrained nature of images; each feature (pixel)
is fully under the adversary's control. However, it is not clear how these
attacks translate to constrained domains that limit which features the adversary
can modify, and how (e.g., network intrusion detection). In this
paper, …
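The contrast the abstract draws can be illustrated with a minimal sketch, assuming a one-step FGSM-style perturbation (a standard attack not specified by the abstract): in an unconstrained domain every feature may be perturbed, while a constrained domain is modeled here with a hypothetical binary mask over the modifiable features.

```python
import numpy as np

def fgsm_masked(x, grad, epsilon, mask):
    """One-step FGSM-style perturbation restricted to modifiable features.

    x       : input feature vector
    grad    : gradient of the loss with respect to x
    epsilon : per-feature perturbation budget
    mask    : 1 where the adversary may change a feature, 0 where it cannot
    """
    return x + epsilon * np.sign(grad) * mask

x = np.array([0.2, 0.5, 0.9, 0.1])
grad = np.array([0.3, -0.7, 0.0, 0.4])

# Unconstrained domain (e.g., images): every feature is attacker-controlled.
unconstrained = fgsm_masked(x, grad, epsilon=0.1, mask=np.ones(4))

# Constrained domain (e.g., network features): only the first two may change.
constrained = fgsm_masked(x, grad, epsilon=0.1, mask=np.array([1, 1, 0, 0]))
```

The mask is an illustrative stand-in for domain constraints; real constrained domains (such as network traffic) also impose inter-feature dependencies that a per-feature mask does not capture.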