March 29, 2023, 1:10 a.m. | Ruyi Ding, Cheng Gongye, Siyue Wang, Aidong Ding, Yunsi Fei

cs.CR updates on arXiv.org

Deep Neural Networks (DNNs) are vulnerable to adversarial perturbations: small changes crafted deliberately on the input to mislead the model into wrong predictions. Adversarial attacks have disastrous consequences for deep learning-empowered critical applications. Existing defense and detection techniques require extensive knowledge of the model, testing inputs, and even execution details. They are not viable for general deep learning implementations where the model internals are unknown, a common 'black-box' scenario for model users. Inspired by the fact that electromagnetic (EM) emanations …
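
For readers unfamiliar with the attack class the abstract refers to, a minimal sketch of one standard way to craft such small input perturbations, the Fast Gradient Sign Method (FGSM), is shown below. This example is not taken from the paper; the model, labels, and the epsilon budget are illustrative placeholders, with PyTorch assumed as the framework.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Craft a small adversarial perturbation with FGSM (illustrative sketch only)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)    # loss w.r.t. the true labels y
    loss.backward()
    # Step in the sign of the input gradient to increase the loss, bounded by epsilon.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep inputs in a valid pixel range
```

A perturbation budget (epsilon) this small is typically imperceptible to a human yet can flip the model's prediction, which is the threat the paper's black-box detection approach targets.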

