Feb. 22, 2023, 2:10 a.m. | Xinghua Xue, Cheng Liu, Ying Wang, Bing Yang, Tao Luo, Lei Zhang, Huawei Li, Xiaowei Li

cs.CR updates on arXiv.org

Vision Transformers (ViTs), which leverage the self-attention mechanism, have shown
superior performance on many classical vision tasks compared to convolutional
neural networks (CNNs) and have gained increasing popularity recently. Existing
work on ViTs mainly optimizes performance and accuracy, but the reliability issues
of ViTs induced by hardware faults in large-scale VLSI designs have generally been
overlooked. In this work, we study the reliability of ViTs and, for the first
time, investigate their vulnerability at different architectural granularities,
ranging from models and layers to modules and patches. …
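The excerpt does not include the paper's injection methodology, but reliability studies of this kind commonly flip bits in a model's parameters and measure the resulting accuracy drop per architectural unit. Below is a minimal, hypothetical sketch of such a bit-flip fault injection on a pretrained ViT; the `timm` model name and all function names are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of weight bit-flip fault injection, a common technique
# for studying hardware-fault reliability. The paper's actual methodology is
# not shown in this excerpt; names here are illustrative.
import random
import struct

import torch
import timm  # assumes the timm library provides the pretrained ViT


def flip_bit(value: float, bit: int) -> float:
    """Flip one bit of a float32 value via its IEEE-754 encoding."""
    (as_int,) = struct.unpack("<I", struct.pack("<f", value))
    (flipped,) = struct.unpack("<f", struct.pack("<I", as_int ^ (1 << bit)))
    return flipped


def inject_faults(model: torch.nn.Module, n_faults: int, seed: int = 0) -> None:
    """Randomly flip n_faults weight bits anywhere in the model."""
    rng = random.Random(seed)
    params = [p for p in model.parameters() if p.dtype == torch.float32]
    with torch.no_grad():
        for _ in range(n_faults):
            p = rng.choice(params)          # pick a random parameter tensor
            idx = rng.randrange(p.numel())  # pick a random element in it
            bit = rng.randrange(32)         # pick a random bit position
            flat = p.view(-1)
            flat[idx] = flip_bit(flat[idx].item(), bit)


model = timm.create_model("vit_base_patch16_224", pretrained=True).eval()
inject_faults(model, n_faults=10)
# Re-evaluating accuracy after injection, grouped by layer, module, or patch,
# would yield the granularity-wise vulnerability comparison the abstract describes.
```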

