all InfoSec news
Vanilla Feature Distillation for Improving the Accuracy-Robustness Trade-Off in Adversarial Training. (arXiv:2206.02158v1 [cs.CV])
June 7, 2022, 1:20 a.m. | Guodong Cao, Zhibo Wang, Xiaowei Dong, Zhifei Zhang, Hengchang Guo, Zhan Qin, Kui Ren
cs.CR updates on arXiv.org
Adversarial training has been widely explored for mitigating attacks against deep models. However, most existing works are still trapped in the dilemma between higher accuracy and stronger robustness, since they tend to fit a model towards robust features (not easily tampered with by adversaries) while ignoring those non-robust but highly predictive features. To achieve a better robustness-accuracy trade-off, we propose the Vanilla Feature Distillation Adversarial Training (VFD-Adv), which conducts knowledge distillation from a pre-trained model (optimized towards high accuracy) to …
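The abstract describes distilling knowledge from an accuracy-oriented pre-trained model into an adversarially trained one. Below is a minimal PyTorch-style sketch of that general idea, assuming a PGD inner attack, an MSE feature-matching distillation term, and a model API that can return intermediate features; the function names, the weighting factor, and whether the teacher sees clean or adversarial inputs are illustrative assumptions, not details from the paper.

# Sketch: adversarial training plus feature distillation from a frozen,
# accuracy-oriented teacher. Names and hyperparameters are illustrative only.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    # Standard PGD attack used to craft adversarial examples (assumed setup).
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv

def distill_adv_step(student, teacher, x, y, optimizer, lambda_distill=1.0):
    # One training step: adversarial cross-entropy plus a feature-matching term
    # pulling the student's features toward the frozen teacher's features.
    # return_features=True is an assumed API for exposing intermediate features.
    x_adv = pgd_attack(student, x, y)
    optimizer.zero_grad()
    logits, feat_s = student(x_adv, return_features=True)
    with torch.no_grad():
        _, feat_t = teacher(x, return_features=True)  # teacher on clean inputs (assumption)
    loss = F.cross_entropy(logits, y) + lambda_distill * F.mse_loss(feat_s, feat_t)
    loss.backward()
    optimizer.step()
    return loss.item()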
Jobs in InfoSec / Cybersecurity
Security Analyst
@ Northwestern Memorial Healthcare | Chicago, IL, United States
GRC Analyst
@ Richemont | Shelton, CT, US
Security Specialist
@ Peraton | Government Site, MD, United States
Information Assurance Security Specialist (IASS)
@ OBXtek Inc. | United States
Cyber Security Technology Analyst
@ Airbus | Bengaluru
Vice President, Cyber Operations Engineer
@ BlackRock | LO9-London - Drapers Gardens