Adversarial-Aware Deep Learning System based on a Secondary Classical Machine Learning Verification Approach. (arXiv:2306.00314v1 [cs.CR])
cs.CR updates on arXiv.org
Deep learning models have been used to build a variety of effective image
classification applications. However, they are vulnerable to adversarial
attacks that seek to mislead the models into predicting incorrect classes. Our
study of the major adversarial attack models shows that they all specifically
target and exploit neural network structures in their designs. This
understanding leads us to the hypothesis that most classical machine learning
models, such as Random Forest (RF), are immune to adversarial attack models
because they do not …