Denoising Autoencoder-based Defensive Distillation as an Adversarial Robustness Algorithm. (arXiv:2303.15901v1 [cs.LG])
cs.CR updates on arXiv.org
Adversarial attacks significantly threaten the robustness of deep neural networks (DNNs). Despite the many defensive methods that have been employed, DNNs remain vulnerable to poisoning attacks, in which attackers tamper with the initial training data. To defend DNNs against such adversarial attacks, this work proposes a novel method that combines the defensive distillation mechanism with a denoising autoencoder (DAE). The technique aims to lower the distilled model's sensitivity to poisoning attacks by detecting and reconstructing poisoned adversarial inputs in …
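Based only on the abstract, the pipeline appears to be: a DAE is trained to reconstruct (potentially poisoned) inputs, and a distilled student model is then trained on the reconstructions against temperature-softened teacher labels. Below is a minimal PyTorch sketch under that assumption; all class and function names, the temperature value, and the training details are illustrative and not taken from the paper.

```python
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    """Minimal DAE: learns to map corrupted inputs back to clean ones (hypothetical architecture)."""
    def __init__(self, dim=784, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(hidden, dim), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

def distillation_targets(teacher_logits, T=20.0):
    """Soften teacher outputs with temperature T, as in standard defensive distillation."""
    return torch.softmax(teacher_logits / T, dim=1)

def train_step(dae, student, teacher, x, opt_dae, opt_student, T=20.0, noise_std=0.1):
    """One combined step: (1) train the DAE to denoise, (2) distill the
    student on DAE-reconstructed inputs. Hyperparameters are assumptions."""
    # 1) DAE step: reconstruct artificially corrupted inputs.
    noisy = x + noise_std * torch.randn_like(x)
    recon = dae(noisy)
    dae_loss = nn.functional.mse_loss(recon, x)
    opt_dae.zero_grad()
    dae_loss.backward()
    opt_dae.step()

    # 2) Distillation step: the student learns the teacher's soft labels,
    #    but only ever sees inputs that passed through the DAE.
    with torch.no_grad():
        clean = dae(x)                               # reconstruct before classification
        soft = distillation_targets(teacher(clean), T)
    log_p = torch.log_softmax(student(clean) / T, dim=1)
    kd_loss = nn.functional.kl_div(log_p, soft, reduction="batchmean")
    opt_student.zero_grad()
    kd_loss.backward()
    opt_student.step()
    return dae_loss.item(), kd_loss.item()
```

The key design point, as the abstract frames it, is that the distilled model never consumes raw training inputs directly: the DAE sits in front of it, so poisoned samples are (ideally) projected back onto the clean data manifold before they can influence the student.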