Phantom Sponges: Exploiting Non-Maximum Suppression to Attack Deep Object Detectors. (arXiv:2205.13618v2 [cs.CV] UPDATED)
Sept. 13, 2022, 1:20 a.m. | Avishag Shapira, Alon Zolfi, Luca Demetrio, Battista Biggio, Asaf Shabtai
cs.CR updates on arXiv.org arxiv.org
Adversarial attacks against deep learning-based object detectors have been
studied extensively in the past few years. Most of the proposed attacks have
targeted the model's integrity (i.e., caused the model to make incorrect
predictions), while adversarial attacks targeting the model's availability, a
critical aspect in safety-critical domains such as autonomous driving, have not
yet been explored by the machine learning research community. In this paper, we
propose a novel attack that negatively affects the decision latency of an
end-to-end object …
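The attack targets non-maximum suppression (NMS), the post-processing step that prunes overlapping candidate boxes. A minimal sketch of greedy NMS in pure Python (illustrative only, not the paper's implementation) shows why flooding the detector with candidate boxes inflates latency: the pruning loop performs pairwise IoU checks, which is quadratic in the number of surviving candidates.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS: repeatedly keep the highest-scoring box and discard
    any remaining box that overlaps it above iou_thresh. Worst case is
    O(n^2) IoU evaluations, so an input crafted to produce many
    high-scoring, low-overlap boxes drives up decision latency."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= iou_thresh]
    return keep
```

For example, two heavily overlapping boxes collapse to one while a distant box survives: `nms([(0,0,10,10), (1,1,11,11), (50,50,60,60)], [0.9, 0.8, 0.7])` keeps indices `[0, 2]`.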
Jobs in InfoSec / Cybersecurity
Cybersecurity Skills Challenge -- Sponsored by DoD
@ Correlation One | United States
Security Operations Center (SOC) Analyst
@ GK Cybersecurity Group | Remote
Azure Security Architect
@ First Quality | Remote US - Eastern or Central Timezone
Threat Intelligence Analyst
@ Atos | Remote Home (England & Wales), GB
Work-study placement (M/F): Hardening, cloud migration, and containerization of a Windows application
@ Alstom | Villeurbanne, FR
Security Specialist / Analyst (CIT)
@ Lely | Maassluis, Netherlands