State-of-the-art optical-based physical adversarial attacks for deep learning computer vision systems. (arXiv:2303.12249v1 [cs.CV])
cs.CR updates on arXiv.org
Adversarial attacks can mislead deep learning models into making false
predictions by adding small perturbations to the original input that are
imperceptible to the human eye, posing a serious security threat to computer
vision systems based on deep learning. Physical adversarial attacks are more
realistic than digital ones, because the perturbation is introduced to the
input before it is captured and converted to a binary image inside the vision
system. In this paper, …