April 29, 2024, 4:11 a.m. | Xinghua Xue, Cheng Liu, Ying Wang, Bing Yang, Tao Luo, Lei Zhang, Huawei Li, Xiaowei Li

cs.CR updates on arXiv.org

arXiv:2302.10468v3 Announce Type: replace
Abstract: Vision Transformers (ViTs), which leverage the self-attention mechanism, have shown superior performance on many classical vision tasks compared to convolutional neural networks (CNNs) and have gained increasing popularity recently. Existing work on ViTs mainly optimizes performance and accuracy, but the reliability issues induced by soft errors in large-scale VLSI designs have generally been overlooked. In this work, we study the reliability of ViTs and investigate their vulnerability at different architectural granularities, ranging from models, layers, and modules to …
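Soft-error vulnerability of this kind is commonly assessed by fault injection: flipping bits in a model's stored weights and measuring the resulting accuracy drop. The truncated abstract does not describe the paper's actual injection framework, so the sketch below is only a minimal illustration in PyTorch; the helper names (flip_random_bit, inject_faults) and the timm ViT referenced in the usage comment are assumptions for illustration, not the authors' code.

import random
import struct

import torch
import torch.nn as nn


def flip_random_bit(value: float) -> float:
    # Flip one random bit in the IEEE-754 float32 encoding of `value`,
    # emulating a single-event upset in weight memory.
    (bits,) = struct.unpack("I", struct.pack("f", value))
    bits ^= 1 << random.randrange(32)
    (flipped,) = struct.unpack("f", struct.pack("I", bits))
    return flipped


@torch.no_grad()
def inject_faults(module: nn.Module, n_faults: int = 1) -> None:
    # Inject `n_faults` independent single-bit flips: pick a random
    # float32 parameter tensor of `module`, then a random element
    # within it, and corrupt that element in place.
    params = [p for p in module.parameters() if p.dtype == torch.float32]
    for _ in range(n_faults):
        p = random.choice(params)
        flat = p.view(-1)
        idx = random.randrange(flat.numel())
        flat[idx] = flip_random_bit(flat[idx].item())


# Hypothetical usage: perturb one attention module of a pretrained ViT
# (model and attribute names assume the timm library) and compare clean
# vs. faulty top-1 accuracy on the same evaluation set:
#
#   model = timm.create_model("vit_base_patch16_224", pretrained=True).eval()
#   inject_faults(model.blocks[0].attn, n_faults=10)

Targeting a single module at a time, as in the usage comment, is one way to probe vulnerability at the granularities the abstract mentions (model, layer, module); sweeping the injection site across blocks would compare their relative sensitivity.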

