Feb. 13, 2023, 2:18 a.m. | Qizhang Li, Yiwen Guo, Wangmeng Zuo, Hao Chen

cs.CR updates on arXiv.org

The vulnerability of deep neural networks (DNNs) to adversarial examples has
attracted great attention in the machine learning community. The problem is
related to the non-flatness and non-smoothness of the loss landscapes obtained
by normal training. Training augmented with adversarial examples (a.k.a.
adversarial training) is considered an effective remedy. In this paper, we
highlight that collaborative examples, which are nearly perceptually
indistinguishable from both adversarial and benign examples yet exhibit
extremely low prediction loss, can be utilized to enhance adversarial
training. A novel method …
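As a rough illustration of the idea in the abstract, the sketch below searches for a collaborative example by running projected gradient descent that minimizes the prediction loss within a small L-infinity ball around an input, the mirror image of a PGD adversarial attack, which maximizes it. This is a hedged reconstruction from the abstract alone, assuming a PyTorch classifier; the function name and the eps, alpha, and step-count defaults are illustrative, not the paper's settings or method.

```python
import torch
import torch.nn.functional as F

def find_collaborative_example(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Search for a collaborative example: a point within an L-infinity
    ball of radius eps around x whose prediction loss is driven as low
    as possible. Structurally this is PGD with the sign of the update
    flipped (descending the loss instead of ascending it).

    Hypothetical sketch; eps, alpha, and steps are illustrative defaults.
    """
    x_col = x.clone().detach()
    for _ in range(steps):
        x_col.requires_grad_(True)
        loss = F.cross_entropy(model(x_col), y)
        grad, = torch.autograd.grad(loss, x_col)
        with torch.no_grad():
            # Step against the gradient to *decrease* the loss.
            x_col = x_col - alpha * grad.sign()
            # Project back into the eps-ball around the original input.
            x_col = x + (x_col - x).clamp(-eps, eps)
            # Keep the result a valid image (assumes [0, 1] pixel range).
            x_col = x_col.clamp(0.0, 1.0)
        x_col = x_col.detach()
    return x_col
```

The sign flip relative to a standard PGD attack is the whole trick here: the same projected first-order search that normally finds high-loss (adversarial) neighbors instead finds low-loss (collaborative) ones.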
