May 3, 2023, 1:10 a.m. | Juanjuan Weng, Zhiming Luo, Dazhen Lin, Shaozi Li, Zhun Zhong

cs.CR updates on arXiv.org

Recent research has shown that Deep Neural Networks (DNNs) are highly
vulnerable to adversarial samples, which are highly transferable and can be
used to attack other, unknown black-box models. To improve the transferability
of adversarial samples, several feature-based adversarial attack methods have
been proposed that disrupt neuron activations in intermediate layers. However,
current state-of-the-art feature-based attack methods typically require
additional computational cost to estimate the importance of neurons. To address
this challenge, we propose a Singular Value Decomposition (SVD)-based feature-level …
