Sept. 16, 2022, 1:20 a.m. | Alexander Cann, Ian Colbert, Ihab Amer

cs.CR updates on arXiv.org

The widespread adoption of deep neural networks in computer vision
applications has generated significant interest in adversarial
robustness. Existing research has shown that maliciously perturbed inputs
specifically tailored for a given model (i.e., adversarial examples) can be
successfully transferred to another independently trained model to induce
prediction errors. Moreover, this property of adversarial examples has been
attributed to features derived from predictive patterns in the data
distribution. Thus, we are motivated to investigate the following question: Can
adversarial …
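The transferability property the abstract describes, that an adversarial example crafted against one model often fools another independently trained model, can be sketched in a few lines. The excerpt names no specific attack or architectures, so the sketch below assumes FGSM (the Fast Gradient Sign Method) as the perturbation and two small untrained PyTorch MLPs as illustrative stand-ins; in practice both models would be trained on the same task before measuring the transferred error rate.

```python
import torch
import torch.nn as nn

# Two independently initialized models (stand-ins; the excerpt does not
# specify architectures, so these small MLPs are assumptions).
def make_model():
    return nn.Sequential(
        nn.Flatten(),
        nn.Linear(28 * 28, 64),
        nn.ReLU(),
        nn.Linear(64, 10),
    )

source_model = make_model()
target_model = make_model()

def fgsm(model, x, y, eps=0.1):
    """Craft adversarial examples against `model` with FGSM: one step of
    size eps along the sign of the loss gradient w.r.t. the input."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

# Transferability check: perturb inputs against source_model only, then
# measure how often the *unseen* target_model is fooled. With untrained
# models and random data this number is meaningless; train both models
# on real data first for a meaningful measurement.
x = torch.rand(32, 1, 28, 28)          # dummy batch; real data assumed
y = torch.randint(0, 10, (32,))
x_adv = fgsm(source_model, x, y)
transfer_err = (target_model(x_adv).argmax(dim=1) != y).float().mean()
print(f"target-model error rate on transferred examples: {transfer_err:.2%}")
```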
