Nov. 28, 2022, 2:10 a.m. | Hung-Jui Wang, Yu-Yu Wu, Shang-Tse Chen

cs.CR updates on arXiv.org

Malicious attackers can generate targeted adversarial examples by adding tiny
perturbations that force neural networks to produce specific incorrect outputs.
Because such examples transfer across models, networks remain vulnerable even
in black-box settings. Recent studies have shown the effectiveness of
ensemble-based methods in generating transferable adversarial examples. To
further enhance transferability, model augmentation methods derive additional
networks to participate in the ensemble. However, existing model augmentation
methods have only been shown to be effective in untargeted attacks. In this
work, we propose Diversified Weight Pruning …
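As a rough illustration of the setting the abstract describes, below is a minimal PyTorch sketch of a targeted attack on an averaged-logit ensemble, plus a toy weight-pruning helper that mimics the general idea of model augmentation. Everything here (the function names, the random pruning scheme, the PGD parameters) is an assumption for illustration only; the abstract is truncated before it describes the actual DWP method.

```python
import copy
import torch
import torch.nn.functional as F

def prune_copy(model, rate=0.1):
    # Hypothetical augmentation helper (not the paper's DWP): copy the
    # model and zero a random fraction `rate` of each weight tensor,
    # yielding an extra ensemble member at no training cost.
    clone = copy.deepcopy(model).eval()
    with torch.no_grad():
        for p in clone.parameters():
            mask = (torch.rand_like(p) >= rate).to(p.dtype)
            p.mul_(mask)
    return clone

def targeted_ensemble_pgd(models, x, target, eps=8/255, alpha=2/255, steps=10):
    # Targeted PGD against an averaged-logit ensemble of frozen,
    # eval-mode classifiers. `x` is a clean batch in [0, 1];
    # `target` holds the attacker-chosen (incorrect) labels.
    x0 = x.clone().detach()
    x_adv = x0.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Fuse the ensemble by averaging member logits.
        logits = torch.stack([m(x_adv) for m in models]).mean(dim=0)
        # Targeted attack: minimize the loss w.r.t. the desired wrong
        # label, so step *against* the gradient.
        loss = F.cross_entropy(logits, target)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() - alpha * grad.sign()
        # Project back into the eps-ball around x0 and the valid pixel range.
        x_adv = (x0 + (x_adv - x0).clamp(-eps, eps)).clamp(0.0, 1.0)
    return x_adv.detach()

# Example: enlarge a one-model "ensemble" with pruned variants, then attack.
# members = [base_model] + [prune_copy(base_model, r) for r in (0.05, 0.1, 0.2)]
# x_adv = targeted_ensemble_pgd(members, x, target_labels)
```

Averaging logits before the softmax is one common fusion choice for ensemble attacks; the paper may fuse members differently.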

