Can Adversarial Training Be Manipulated By Non-Robust Features? (arXiv:2201.13329v3 [cs.LG] UPDATED)
Oct. 6, 2022, 1:20 a.m. | Lue Tao, Lei Feng, Hongxin Wei, Jinfeng Yi, Sheng-Jun Huang, Songcan Chen
cs.CR updates on arXiv.org arxiv.org
Adversarial training, originally designed to resist test-time adversarial
examples, has been shown to be promising in mitigating training-time
availability attacks. This defense ability, however, is challenged in this
paper. We identify a novel threat model named stability attacks, which aims to
hinder robust availability by slightly manipulating the training data. Under
this threat, we show that adversarial training using a conventional defense
budget $\epsilon$ provably fails to provide test robustness in a simple
statistical setting, where the non-robust features of the …
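The "conventional defense budget $\epsilon$" refers to the standard adversarial training setup: an inner maximization finds an $\ell_\infty$-bounded perturbation of each training point, and the outer loop minimizes the loss on those perturbed points. A minimal sketch of that setup on a toy linear model is below; the logistic model, FGSM-style inner step, and synthetic data are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def fgsm_perturb(x, y, w, eps):
    """One-step l_inf attack (inner maximization) on the logistic loss
    of a linear model w, with perturbation budget eps."""
    margin = y * (x @ w)
    # dL/d(w.x) for logistic loss L = log(1 + exp(-margin))
    coef = -y * (1.0 / (1.0 + np.exp(margin)))
    grad_x = coef[:, None] * w[None, :]
    return x + eps * np.sign(grad_x)

def adv_train(x, y, eps, lr=0.1, epochs=200):
    """Outer minimization: gradient steps on the worst-case loss."""
    w = np.zeros(x.shape[1])
    for _ in range(epochs):
        x_adv = fgsm_perturb(x, y, w, eps)          # attack current model
        margin = y * (x_adv @ w)
        coef = -y * (1.0 / (1.0 + np.exp(margin)))
        grad_w = (coef[:, None] * x_adv).mean(axis=0)
        w -= lr * grad_w                            # train on attacked data
    return w

# Toy data: coordinate 0 is a strongly predictive feature whose magnitude
# (2.0) exceeds the budget eps, so training can rely on it robustly.
n = 200
y = rng.choice([-1.0, 1.0], size=n)
x = rng.normal(0, 0.1, size=(n, 5))
x[:, 0] += 2.0 * y

w = adv_train(x, y, eps=0.25)
acc = np.mean(np.sign(x @ w) == y)
print(f"clean accuracy: {acc:.2f}")
```

A stability attack, in the paper's terms, would slightly manipulate the training inputs `x` so that this same procedure, run with the same budget $\epsilon$, no longer yields a robust classifier.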