Backdoor Defense via Suppressing Model Shortcuts. (arXiv:2211.05631v1 [cs.CV])
Nov. 11, 2022, 2:20 a.m. | Sheng Yang, Yiming Li, Yong Jiang, Shu-Tao Xia
cs.CR updates on arXiv.org arxiv.org
Recent studies have demonstrated that deep neural networks (DNNs) are
vulnerable to backdoor attacks during the training process. Specifically, the
adversaries intend to embed hidden backdoors in DNNs so that malicious model
predictions can be activated through pre-defined trigger patterns. In this
paper, we explore the backdoor mechanism from the angle of the model structure.
We select the skip connection for discussion, inspired by the understanding
that it helps the learning of model "shortcuts" where backdoor triggers are
usually easier …
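To make the threat model concrete: in a typical data-poisoning backdoor attack (e.g., BadNets-style), the adversary stamps a fixed trigger pattern onto a fraction of training images and relabels them to a target class, so the trained model associates the trigger with that class. The sketch below is purely illustrative of this attack setting, not of the defense proposed in the paper; the function name, patch placement, and target label are all hypothetical choices.

```python
import numpy as np

def stamp_trigger(images, labels, target_label=0, patch_value=1.0, patch_size=3):
    """Illustrative BadNets-style poisoning (hypothetical helper, not the
    paper's method): stamp a small bright patch in the bottom-right corner
    of each image and relabel every poisoned sample to the target class."""
    poisoned = images.copy()
    # The pre-defined trigger pattern: a patch_size x patch_size block
    poisoned[:, -patch_size:, -patch_size:] = patch_value
    # All poisoned samples are relabeled to the attacker's target class
    poisoned_labels = np.full_like(labels, target_label)
    return poisoned, poisoned_labels

# Example: poison a batch of four 28x28 grayscale images
imgs = np.zeros((4, 28, 28), dtype=np.float32)
lbls = np.array([3, 7, 1, 5])
p_imgs, p_lbls = stamp_trigger(imgs, lbls)
```

At inference time, any input carrying the same patch activates the malicious prediction, while clean inputs behave normally; this is the behavior the paper's structural analysis of skip connections aims to suppress.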