Oct. 12, 2022, 1:20 a.m. | Chenghao Sun, Yonggang Zhang, Wan Chaoqun, Qizhou Wang, Ya Li, Tongliang Liu, Bo Han, Xinmei Tian

cs.CR updates on arXiv.org

Black-box attacks can generate adversarial examples without access to the parameters of the target model, greatly exacerbating the threat to deployed deep neural networks (DNNs). However, prior work holds that black-box attacks fail to mislead target models when their training data and outputs are inaccessible. In this work, we argue that black-box attacks can mount practical attacks even in this extremely restrictive scenario, where only a few test samples are available. Specifically, we find that attacking the shallow layers of DNNs trained on a …
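
The abstract is truncated, so the paper's actual method is not recoverable here. As a purely illustrative sketch of the general idea it gestures at — perturbing inputs so as to distort a model's shallow-layer activations, with no labels, logits, or target-model access — here is a minimal, hypothetical PyTorch example. The surrogate model, the choice of shallow layers, and all hyperparameters are assumptions, not the authors' design:

```python
# Hypothetical sketch of a shallow-layer feature attack on a surrogate model.
# NOT the paper's method (the abstract above is truncated); it only illustrates
# perturbing inputs to maximally distort shallow-layer activations, a strategy
# that tends to transfer to unseen target models.
import torch
import torchvision.models as models

surrogate = models.resnet18(weights=None).eval()          # stand-in surrogate
shallow = torch.nn.Sequential(*list(surrogate.children())[:4])  # conv1..maxpool

def shallow_layer_attack(x, eps=8 / 255, alpha=2 / 255, steps=10):
    """PGD-style attack maximizing the L2 gap between shallow-layer features
    of clean and perturbed inputs (no labels or target-model outputs used)."""
    feat_clean = shallow(x).detach()
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = torch.norm(shallow(x_adv) - feat_clean)    # shallow-feature gap
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()      # ascend the gap
        x_adv = x + (x_adv - x).clamp(-eps, eps)          # project to eps-ball
        x_adv = x_adv.clamp(0, 1)                         # keep valid pixels
    return x_adv.detach()

# Example: craft adversarial examples from a few test samples alone.
x = torch.rand(4, 3, 224, 224)
x_adv = shallow_layer_attack(x)
```

Because only a few test samples are assumed available, the sketch never queries a target model; it relies entirely on the surrogate's shallow features, which is what makes the restrictive no-data, no-output setting plausible.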
