On Fragile Features and Batch Normalization in Adversarial Training. (arXiv:2204.12393v1 [cs.LG])
April 27, 2022, 1:20 a.m. | Nils Philipp Walter, David Stutz, Bernt Schiele
cs.CR updates on arXiv.org arxiv.org
Modern deep learning architectures utilize batch normalization (BN) to
stabilize training and improve accuracy. It has been shown that the BN layers
alone are surprisingly expressive. In the context of robustness against
adversarial examples, however, BN is argued to increase vulnerability. That is,
BN helps to learn fragile features. Nevertheless, BN is still used in
adversarial training, which is the de facto standard for learning robust features.
In order to shed light on the role of BN in adversarial training, we …
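For readers unfamiliar with the mechanism the abstract discusses, here is a minimal NumPy sketch of the batch-normalization forward pass: each feature is normalized over the batch dimension and then rescaled by learned parameters. This is illustrative only — the function and variable names are not from the paper, and a real BN layer also tracks running statistics for inference.

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # Normalize each feature column over the batch (axis 0),
    # then apply the learned affine transform (gamma, beta).
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

# Toy batch: 4 samples, 3 features.
x = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [3., 6., 9.],
              [4., 8., 12.]])
gamma = np.ones(3)   # learned scale
beta = np.zeros(3)   # learned shift
y = batch_norm(x, gamma, beta)
# With gamma=1, beta=0, each output column has roughly
# zero mean and unit variance across the batch.
```

In adversarial training, this same normalization is applied to batches that mix (or consist of) adversarially perturbed inputs, which is precisely the interaction the paper examines.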
Jobs in InfoSec / Cybersecurity
SOC 2 Manager, Audit and Certification
@ Deloitte | US and CA Multiple Locations
Information Security Engineers
@ D. E. Shaw Research | New York City
Digital Trust Cyber Transformation Analyst
@ KPMG India | Chennai, Tamil Nadu, India
Cyber Technical Associate Director - Emerging Technology and Assets
@ Accenture Federal Services | Arlington, VA
IT Security & Network Manager
@ AECOM | Basingstoke, United Kingdom
Sr. Cyber Security Analyst
@ New York Power Authority | White Plains, US