Soft Error Reliability Analysis of Vision Transformers
April 29, 2024, 4:11 a.m. | Xinghua Xue, Cheng Liu, Ying Wang, Bing Yang, Tao Luo, Lei Zhang, Huawei Li, Xiaowei Li
cs.CR updates on arXiv.org
Abstract: Vision Transformers (ViTs), which leverage the self-attention mechanism, have shown superior performance on many classical vision tasks compared to convolutional neural networks (CNNs) and have gained increasing popularity recently. Existing work on ViTs mainly optimizes performance and accuracy, but the reliability issues of ViTs induced by soft errors in large-scale VLSI designs have generally been overlooked. In this work, we mainly study the reliability of ViTs and investigate their vulnerability at different architectural granularities, ranging from models, layers, modules, and …
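Soft-error reliability studies of this kind are typically carried out by fault injection: flipping individual bits in stored weights or activations and observing the effect on accuracy. As a minimal illustration (not taken from the paper), the sketch below corrupts a single bit of a float32 weight in a NumPy array; the array name and indices are hypothetical:

```python
import numpy as np

def flip_bit(weights: np.ndarray, index: int, bit: int) -> np.ndarray:
    """Simulate a soft error by flipping one bit of one float32 weight.

    weights : flat float32 array (e.g. a flattened layer's parameters)
    index   : which element to corrupt
    bit     : which bit to flip, 0 (LSB) .. 31 (sign bit)
    Returns a corrupted copy; the input array is left untouched.
    """
    corrupted = weights.copy()
    # Reinterpret the float32 storage as uint32 so we can XOR a bit mask.
    as_bits = corrupted.view(np.uint32)
    as_bits[index] ^= np.uint32(1) << np.uint32(bit)
    return corrupted

w = np.array([0.5, -1.25, 3.0], dtype=np.float32)
# Flipping a high exponent bit usually causes a large magnitude change,
# which is why exponent bits tend to dominate vulnerability in such studies.
w_faulty = flip_bit(w, index=2, bit=30)
```

In a full campaign one would repeat such injections across models, layers, and modules, re-run inference, and compare the corrupted outputs against the fault-free baseline to estimate vulnerability at each granularity.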