Oct. 11, 2022, 1:20 a.m. | Chenghao Sun, Yonggang Zhang, Wan Chaoqun, Qizhou Wang, Ya Li, Tongliang Liu, Bo Han, Xinmei Tian

cs.CR updates on arXiv.org

Black-box attacks can generate adversarial examples without access to the
target model's parameters, greatly exacerbating the threat to deployed deep
neural networks (DNNs). However, previous works state that black-box attacks
fail to mislead target models when their training data and outputs are
inaccessible. In this work, we argue that black-box attacks can pose practical
threats even in this extremely restrictive scenario, where only a few test
samples are available. Specifically, we find that attacking the shallow layers
of DNNs trained on a …
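To make the idea of "attacking the shallow layers" concrete, below is a minimal, hypothetical PyTorch sketch: adversarial examples are crafted on a surrogate model by maximizing the distortion of its early-layer features, with the hope that they transfer to an unseen black-box target. The surrogate choice (resnet18), the layer cut, the L2 feature-distortion loss, and the PGD-style update are all illustrative assumptions, not the paper's actual method.

import torch
import torch.nn as nn
import torchvision.models as models

# Hypothetical surrogate; the paper's truncated abstract does not specify one.
surrogate = models.resnet18(weights=None).eval()

# Keep only the shallow layers: stem plus the first residual stage.
shallow = nn.Sequential(
    surrogate.conv1, surrogate.bn1, surrogate.relu,
    surrogate.maxpool, surrogate.layer1,
)

def shallow_layer_attack(x, eps=8 / 255, alpha=2 / 255, steps=40):
    """PGD-style attack that maximizes the L2 distance between the
    shallow features of the clean input and the perturbed input."""
    with torch.no_grad():
        clean_feat = shallow(x)
    # Random start inside the eps-ball.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = (shallow(x_adv) - clean_feat).pow(2).mean()
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()          # ascend the distortion
            x_adv = x + (x_adv - x).clamp(-eps, eps)     # project to the eps-ball
            x_adv = x_adv.clamp(0, 1)                    # keep a valid image
    return x_adv.detach()

# Only a handful of test samples are needed; no training data, and no
# queries to the black-box target while crafting the perturbation.
x = torch.rand(2, 3, 224, 224)
x_adv = shallow_layer_attack(x)

The intuition this sketch relies on is that shallow features (edges, textures) are broadly shared across independently trained networks, so distorting them on a surrogate is more likely to transfer than attacking model-specific output logits.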

