Aug. 16, 2023, 1:10 a.m. | Zhuoer Xu, Zhangxuan Gu, Jianping Zhang, Shiwen Cui, Changhua Meng, Weiqiang Wang

cs.CR updates on arXiv.org

Deep neural networks are vulnerable to adversarial examples, making it imperative to test a model's robustness before deployment. Transfer-based attackers craft adversarial examples against surrogate models and transfer them to victim models deployed in black-box settings. To enhance adversarial transferability, structure-based attackers adjust the backpropagation path so that the attack does not overfit the surrogate model. However, existing structure-based attackers fail to explore the convolution module in CNNs and modify the backpropagation graph heuristically, leading to limited effectiveness. In …
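The abstract is truncated before the paper's own method, but the transfer-based setting it describes can be sketched concretely. Below is a minimal sketch, assuming PyTorch and torchvision are available; the FGSM step, the epsilon value, the model pairing, and the helper name `fgsm_transfer` are illustrative assumptions, not the paper's structure-based attack.

```python
# Minimal transfer-based attack sketch (assumption: plain FGSM, not the
# paper's method). Gradients come only from the white-box surrogate; the
# victim is queried once for its prediction, mimicking the black-box setting.
import torch
import torchvision.models as models

def fgsm_transfer(surrogate, victim, x, y, eps=8 / 255):
    # Craft the adversarial example against the surrogate model.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(surrogate(x_adv), y)
    loss.backward()
    # One gradient-sign step, clipped to the valid image range [0, 1].
    x_adv = (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()
    # Transfer check: does the black-box victim also misclassify it?
    with torch.no_grad():
        transferred = victim(x_adv).argmax(dim=1) != y
    return x_adv, transferred

# Usage (hypothetical model pairing): ResNet-18 as surrogate, VGG-16 as victim.
surrogate = models.resnet18(weights="IMAGENET1K_V1").eval()
victim = models.vgg16(weights="IMAGENET1K_V1").eval()
```

Structure-based attackers build on this setup by altering how `loss.backward()` routes gradients through the surrogate (for example, reweighting particular modules' backward paths) rather than changing the forward computation.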
