GradMDM: Adversarial Attack on Dynamic Networks. (arXiv:2304.06724v1 [cs.CR])
cs.CR updates on arXiv.org
Dynamic neural networks can greatly reduce computation redundancy without
compromising accuracy by adapting their structures based on the input. In this
paper, we explore the robustness of dynamic neural networks against
energy-oriented attacks targeted at reducing their efficiency. Specifically, we
attack dynamic models with our novel algorithm, GradMDM, which adjusts the
direction and the magnitude of the gradients to effectively find a small
perturbation for each input that activates more computational units of dynamic
models …
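The core idea — perturbing an input within a small budget so that more gating units of a dynamic network fire, raising its compute cost — can be illustrated with a toy sketch. Everything below is an assumption for illustration: the gated model, the energy objective (sum of soft gate activations), and the PGD-style sign-gradient loop are stand-ins, not the paper's actual GradMDM formulation.

```python
import numpy as np

# Toy "dynamic" model: unit i is active when sigmoid(w_i . x) exceeds a
# threshold, so compute cost grows with the number of firing gates.
# (Hypothetical setup, not the architecture from the paper.)
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))          # 8 gating units over 4-dim inputs

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gates(x):
    return sigmoid(W @ x)            # soft gate activations in (0, 1)

def active_units(x, thresh=0.5):
    return int((gates(x) > thresh).sum())

def energy_attack(x, eps=0.5, steps=50, lr=0.1):
    """Sketch of an energy-oriented attack: ascend the summed gate
    activations so more units fire, while the perturbation stays in an
    L-infinity ball of radius eps around the original input."""
    x_adv = x.copy()
    for _ in range(steps):
        g = gates(x_adv)
        # gradient of sum(sigmoid(W x)) w.r.t. x is W^T (g * (1 - g))
        grad = W.T @ (g * (1.0 - g))
        # sign step: a crude gradient-direction adjustment with a
        # fixed magnitude lr (far simpler than GradMDM's scheme)
        x_adv = x_adv + lr * np.sign(grad)
        x_adv = np.clip(x_adv, x - eps, x + eps)   # project onto the ball
    return x_adv

x = rng.normal(size=4)
x_adv = energy_attack(x)
print(active_units(x), "->", active_units(x_adv))
```

The sketch captures only the attack surface the abstract describes: unlike an accuracy-oriented attack, the loss here is a proxy for computation, so a successful perturbation leaves the input almost unchanged while inflating the model's dynamic workload.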