Improving Adversarial Transferability via Neuron Attribution-Based Attacks. (arXiv:2204.00008v1 [cs.LG])
April 4, 2022, 1:20 a.m. | Jianping Zhang, Weibin Wu, Jen-tse Huang, Yizhan Huang, Wenxuan Wang, Yuxin Su, Michael R. Lyu
cs.CR updates on arXiv.org
Deep neural networks (DNNs) are known to be vulnerable to adversarial
examples. It is thus imperative to devise effective attack algorithms to
identify the deficiencies of DNNs beforehand in security-sensitive
applications. To efficiently tackle the black-box setting where the target
model's particulars are unknown, feature-level transfer-based attacks propose
to contaminate the intermediate feature outputs of local models, and then
directly employ the crafted adversarial samples to attack the target model. Due
to the transferability of features, feature-level attacks have shown …
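The feature-level transfer attack described above can be sketched in a few lines. The snippet below is a hypothetical NumPy-only illustration, not the paper's neuron attribution method: it stands in for the local surrogate model with a single linear layer `W`, and perturbs the input to maximize the distortion of that intermediate feature output within an L-infinity budget, which is the core loop such attacks share.

```python
import numpy as np

# Hypothetical sketch of a feature-level transfer attack: perturb the input
# to distort an intermediate feature map of a LOCAL surrogate model, in the
# hope that the distortion transfers to the unseen black-box target model.
# A single linear layer W stands in for the surrogate's intermediate layer.

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 32))   # surrogate's intermediate layer (assumed)
x = rng.standard_normal(32)         # clean input
eps = 0.1                           # L-infinity perturbation budget

features = lambda v: W @ v          # intermediate feature output
f_clean = features(x)

# Iterative sign-gradient ascent on the feature distortion ||f(x') - f(x)||^2;
# its gradient w.r.t. x' is 2 * W.T @ (f(x') - f(x)). Start from a small
# random offset so the initial gradient is nonzero.
x_adv = x + rng.uniform(-eps / 10, eps / 10, size=x.shape)
for _ in range(10):
    grad = 2.0 * W.T @ (features(x_adv) - f_clean)
    x_adv = x_adv + (eps / 10) * np.sign(grad)
    x_adv = x + np.clip(x_adv - x, -eps, eps)   # project into the eps-ball

distortion = np.linalg.norm(features(x_adv) - f_clean)
```

The crafted `x_adv` would then be submitted directly to the target model; no queries to the target are needed during crafting, which is what makes the attack black-box.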
Jobs in InfoSec / Cybersecurity
SOC 2 Manager, Audit and Certification
@ Deloitte | US and CA Multiple Locations
Cyber Security Cloud Solution Architect
@ Microsoft | London, London, United Kingdom
Compliance Program Analyst
@ SailPoint | United States
Software Engineer III, Infrastructure, Google Cloud Security and Privacy
@ Google | Sunnyvale, CA, USA
Cryptography Expert
@ Raiffeisen Bank Ukraine | Kyiv, Kyiv city, Ukraine
Senior Cyber Intelligence Planner (15.09)
@ OCT Consulting, LLC | Washington, District of Columbia, United States