May 22, 2024, 4:11 a.m. | Junjie Yang, Tianlong Chen, Xuxi Chen, Zhangyang Wang, Yingbin Liang

cs.CR updates on arXiv.org arxiv.org

arXiv:2312.01260v2 Announce Type: replace-cross
Abstract: Neural networks have demonstrated success in various domains, yet their performance can be significantly degraded by even a small input perturbation. Consequently, the construction of such perturbations, known as adversarial attacks, has gained significant attention; many of these attacks fall within "white-box" scenarios, where the attacker has full access to the neural network. Existing attack algorithms, such as projected gradient descent (PGD), commonly apply the sign function to the raw gradient before updating the adversarial input, thereby …
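The sign step the abstract refers to is the heart of the standard L∞ PGD update: only the direction of each gradient coordinate is kept, and its magnitude is discarded. A minimal sketch of that baseline update, assuming a PyTorch classifier and inputs in [0, 1] (the names `model`, `eps`, `alpha`, and `steps` are illustrative, not from the paper):

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Standard L-inf PGD: step by alpha * sign(gradient), then project
    back into the eps-ball around the clean input x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # The sign() call is the step the paper questions: gradient
        # magnitude information is thrown away at every iteration.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.clamp(x_adv, x - eps, x + eps)  # project to eps-ball
        x_adv = torch.clamp(x_adv, 0.0, 1.0)          # keep valid pixel range
    return x_adv
```

Dropping `.sign()` and stepping along the raw (suitably scaled) gradient is the design choice the paper's title questions.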

arxiv attack cs.cr cs.lg function sign stat.ml
