June 24, 2022, 1:20 a.m. | Haojing Shen, Sihong Chen, Ran Wang, Xizhao Wang

cs.CR updates on arXiv.org arxiv.org

In this paper, we propose a defence strategy that improves adversarial
robustness by incorporating hidden-layer representations. The key idea of this
defence strategy is to compress or filter out input information, including
adversarial perturbations. The strategy can be regarded as an activation
function that can be applied to any kind of neural network, and we prove its
effectiveness theoretically under certain conditions. In addition, by
incorporating hidden-layer representations, we propose three
types of adversarial attacks to …
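The abstract is truncated, so the concrete form of the defence is not given here. Purely as an illustrative sketch (not the authors' method), a "filtering" activation of the kind described could look like a soft-threshold nonlinearity that suppresses small activation components; the module name `FilteringActivation` and the threshold `tau` are assumptions for illustration only.

```python
# Illustrative sketch only -- NOT the paper's actual defence.
# Assumes a soft-threshold style "filtering" activation; `tau` is a hypothetical parameter.
import torch
import torch.nn as nn


class FilteringActivation(nn.Module):
    """Drop-in activation that zeroes out small components,
    loosely illustrating "compress or filter out input information"."""

    def __init__(self, tau: float = 0.1):
        super().__init__()
        self.tau = tau  # threshold below which components are treated as noise

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Soft-thresholding: shrink magnitudes by tau, zeroing small values.
        return torch.sign(x) * torch.clamp(x.abs() - self.tau, min=0.0)


# Usage: insert like any other activation in a network.
model = nn.Sequential(
    nn.Linear(784, 256),
    FilteringActivation(tau=0.05),
    nn.Linear(256, 10),
)
```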

adversarial attacks, hidden layer representation
