Sept. 30, 2022, 1:20 a.m. | Chenghao Sun, Yonggang Zhang, Wan Chaoqun, Qizhou Wang, Ya Li, Tongliang Liu, Bo Han, Xinmei Tian

cs.CR updates on arXiv.org arxiv.org

Black-box attacks can generate adversarial examples without accessing the parameters of the target model, greatly exacerbating the threats to deployed deep neural networks (DNNs). However, previous works claim that black-box attacks fail to mislead target models when their training data and outputs are inaccessible. In this work, we argue that black-box attacks can pose practical threats even in this extremely restrictive scenario, where only a few test samples are available. Specifically, we find that attacking the shallow layers of DNNs trained on a …
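To make the setting concrete, here is a minimal sketch of a shallow-layer feature attack in PyTorch. Since the abstract is truncated, everything specific here is an assumption rather than the authors' method: a pretrained ResNet-18 stands in as the surrogate, its first residual stage stands in for the "shallow layers," and a PGD-style update maximizes feature distortion using only a handful of test images, with no labels and no queries to the target model.

import torch
import torch.nn.functional as F
import torchvision.models as models

# Assumed surrogate: a pretrained ResNet-18; its early blocks serve as the
# "shallow layers" (an illustrative choice, not the paper's configuration).
resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
for p in resnet.parameters():
    p.requires_grad_(False)
shallow = torch.nn.Sequential(
    resnet.conv1, resnet.bn1, resnet.relu, resnet.maxpool, resnet.layer1
)

def shallow_layer_attack(x, eps=8 / 255, alpha=2 / 255, steps=10):
    """Craft adversarial examples by maximizing the distortion of
    shallow-layer feature maps; uses neither labels nor target outputs."""
    clean_feat = shallow(x).detach()
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Drive the adversarial features away from the clean features.
        loss = F.mse_loss(shallow(x_adv), clean_feat)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv + alpha * grad.sign()).detach()
        # Project back into the L-infinity ball and the valid pixel range.
        x_adv = torch.clamp(x_adv, x - eps, x + eps).clamp(0.0, 1.0)
    return x_adv

# Usage: a single available test image in [0, 1], shape (1, 3, 224, 224).
x = torch.rand(1, 3, 224, 224)
x_adv = shallow_layer_attack(x)

The intuition behind this kind of attack is that shallow-layer features (edges, textures) are broadly shared across independently trained DNNs, so perturbations that corrupt them on a surrogate tend to transfer to an unseen target.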
