Adversarial Attacks Against Uncertainty Quantification. (arXiv:2309.10586v1 [cs.CV])
cs.CR updates on arXiv.org
Machine-learning models can be fooled by adversarial examples, i.e.,
carefully crafted input perturbations that force models to output wrong
predictions. Uncertainty quantification has recently been proposed to
detect adversarial inputs, under the assumption that such attacks exhibit
higher prediction uncertainty than pristine data; however, it has been
shown that adaptive attacks specifically designed to also reduce the
uncertainty estimate can easily bypass this defense mechanism. In this
work, we focus on a different adversarial scenario in which the attacker
is …
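To make the two ideas in the abstract concrete, here is a minimal sketch (not the paper's method) in PyTorch: a detector that flags inputs whose predictive entropy exceeds a threshold, and an adaptive one-step attack whose loss both maximizes misclassification and penalizes entropy, so the perturbed input stays below the detector's threshold. All names (model, x, y, epsilon, lam, tau) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def predictive_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Shannon entropy of the softmax distribution, per sample."""
    p = F.softmax(logits, dim=-1)
    return -(p * p.clamp_min(1e-12).log()).sum(dim=-1)

def is_flagged(model, x: torch.Tensor, tau: float) -> torch.Tensor:
    """Uncertainty-based detector: reject inputs whose entropy exceeds tau."""
    with torch.no_grad():
        return predictive_entropy(model(x)) > tau

def adaptive_fgsm(model, x: torch.Tensor, y: torch.Tensor,
                  epsilon: float = 0.03, lam: float = 1.0) -> torch.Tensor:
    """One-step adaptive attack: ascend the cross-entropy loss (to cause a
    wrong prediction) while descending predictive entropy (to evade the
    uncertainty-based detector). lam trades off the two objectives."""
    x_adv = x.clone().detach().requires_grad_(True)
    logits = model(x_adv)
    loss = F.cross_entropy(logits, y) - lam * predictive_entropy(logits).mean()
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

A non-adaptive attack (lam = 0) typically lands in a high-entropy region and gets flagged; setting lam > 0 is what the abstract means by also reducing the uncertainty estimate.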