Generating Adversarial Samples in Mini-Batches May Be Detrimental To Adversarial Robustness. (arXiv:2303.17720v1 [cs.LG])
cs.CR updates on arXiv.org
Neural networks have proven to be both highly effective in computer
vision and highly vulnerable to adversarial attacks. Consequently, as the use
of neural networks increases due to their unrivaled performance, so too does
the threat posed by adversarial attacks. In this work, we build towards
addressing the challenge of adversarial robustness by exploring the
relationship between the mini-batch size used during adversarial sample
generation and the strength of the adversarial samples produced. We demonstrate
that an increase in …