Defenses in Adversarial Machine Learning: A Survey. (arXiv:2312.08890v1 [cs.CV])
cs.CR updates on arXiv.org
Adversarial phenomena have been widely observed in machine learning (ML)
systems, especially those using deep neural networks: in certain cases, an ML
system may produce predictions that are inconsistent with, and incomprehensible
to, humans. This phenomenon poses a serious security threat to the practical
deployment of ML systems, and several advanced attack paradigms have been
developed to exploit it, chiefly backdoor attacks, weight attacks, and
adversarial examples. For each attack paradigm, various
defense paradigms have been developed to …
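Of the attack paradigms the abstract names, adversarial examples are the easiest to illustrate in a few lines. Below is a minimal sketch (not taken from the survey) of the Fast Gradient Sign Method, FGSM, which perturbs an input in the direction of the sign of the loss gradient. To keep it self-contained it targets a tiny hand-written logistic-regression model with an analytic gradient; the weights, inputs, and function names are all hypothetical. Real attacks compute the input gradient of a deep network via automatic differentiation.

```python
# Illustrative FGSM sketch: x_adv = x + eps * sign(grad_x loss).
# The "model" here is a toy logistic regression so the gradient can be
# written by hand; all values below are made up for the demo.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad_x(w, b, x, y):
    # Gradient w.r.t. the INPUT x of the binary cross-entropy loss
    # for p = sigmoid(w.x + b); simplifies to (p - y) * w.
    p = sigmoid(np.dot(w, x) + b)
    return (p - y) * w

def fgsm(w, b, x, y, eps):
    # One signed-gradient step on the input increases the loss,
    # pushing the prediction away from the true label y.
    return x + eps * np.sign(loss_grad_x(w, b, x, y))

# Toy demo: a point the model classifies correctly as class 1 ...
w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([1.0, 0.2]), 1.0
p_clean = sigmoid(np.dot(w, x) + b)    # confident prediction for class 1

# ... is flipped by a small signed perturbation of the input.
x_adv = fgsm(w, b, x, y, eps=1.5)
p_adv = sigmoid(np.dot(w, x_adv) + b)  # confidence collapses
print(p_clean, p_adv)
```

Defenses surveyed under this paradigm (e.g., adversarial training) typically counter exactly this step, by training on such perturbed inputs or by certifying robustness within an `eps`-ball.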