May 19, 2023, 1:10 a.m. | Chong Yu, Tao Chen, Zhongxue Gan

cs.CR updates on arXiv.org arxiv.org

Adversarial attacks are commonly regarded as a serious threat to neural networks because they induce misleading behavior. This paper presents the opposite perspective: adversarial attacks can be harnessed to improve neural models if amended correctly. Unlike traditional adversarial defense or adversarial training schemes, which aim to improve adversarial robustness, the proposed adversarial amendment (AdvAmd) method aims to improve the accuracy of neural models on benign samples. We thoroughly analyze the distribution mismatch between benign and adversarial samples. …
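The adversarial samples the abstract refers to are typically produced by gradient-based perturbations of benign inputs. As a hedged illustration (this is the classic fast gradient sign method on a hypothetical toy linear classifier, not the paper's AdvAmd procedure; the model, `eps`, and all names here are assumptions for demonstration only):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm_perturb(W, x, y, eps):
    """FGSM: shift x along the sign of the input-gradient of the loss.

    For a linear model with logits = W @ x and cross-entropy loss
    against label y, the input gradient is W.T @ (softmax(W @ x) - onehot(y)).
    """
    p = softmax(W @ x)
    onehot = np.zeros_like(p)
    onehot[y] = 1.0
    grad_x = W.T @ (p - onehot)
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))           # toy 3-class linear "model"
x = rng.normal(size=4)                # a benign sample
y = int(np.argmax(softmax(W @ x)))    # the class the model currently predicts
x_adv = fgsm_perturb(W, x, y, eps=0.5)

# The perturbation raises the loss, so the model's confidence in y drops.
print(softmax(W @ x)[y], softmax(W @ x_adv)[y])
```

Analyzing how the distributions of `x` and `x_adv` differ for samples near the decision boundary is the kind of mismatch the paper studies.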
