Reliability Analysis of Vision Transformers. (arXiv:2302.10468v1 [cs.CR])
cs.CR updates on arXiv.org
Vision Transformers (ViTs), which leverage the self-attention mechanism, have shown
superior performance on many classical vision tasks compared to convolutional
neural networks (CNNs) and have recently gained increasing popularity. Existing
work on ViTs mainly optimizes performance and accuracy, but ViT reliability issues
induced by hardware faults in large-scale VLSI designs have generally been
overlooked. In this work, we study the reliability of ViTs and, for the first time,
investigate their vulnerability at different architectural granularities, ranging
from models and layers to modules and patches. …
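Reliability studies of this kind typically simulate hardware faults in software by flipping random bits in a model's weight tensors and measuring the effect on the output. The sketch below is a minimal, hypothetical illustration of single-bit-flip fault injection into a float32 weight matrix using NumPy; it is not the paper's actual evaluation framework, and the function names are assumptions for illustration.

```python
import numpy as np

def inject_faults(weights: np.ndarray, n_faults: int, rng) -> np.ndarray:
    """Return a copy of `weights` with `n_faults` random single-bit flips.

    Each fault picks a random float32 element and flips one random bit
    (bit 0 = mantissa LSB, bit 31 = sign), emulating a soft error in
    on-chip weight storage.
    """
    faulty = weights.astype(np.float32).copy()
    # Reinterpret the float32 buffer as uint32 so bits can be XOR-flipped.
    flat_bits = faulty.view(np.uint32).ravel()
    idx = rng.integers(0, flat_bits.size, size=n_faults)
    bits = rng.integers(0, 32, size=n_faults)
    for i, b in zip(idx, bits):
        flat_bits[i] ^= np.uint32(1 << int(b))
    return faulty

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8)).astype(np.float32)
W_faulty = inject_faults(W, n_faults=4, rng=rng)

# Count perturbed weights; may be below 4 if two faults hit the same bit.
n_changed = int((W != W_faulty).sum())
print(n_changed)
```

A granularity-level study like the one described would repeat this injection while restricting faults to a single layer, module, or patch embedding at a time, then compare the resulting accuracy drop across those scopes.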