One-shot Neural Backdoor Erasing via Adversarial Weight Masking. (arXiv:2207.04497v2 [cs.LG] UPDATED)
Nov. 3, 2022, 1:20 a.m. | Shuwen Chai, Jinghui Chen
cs.CR updates on arXiv.org arxiv.org
Recent studies show that despite achieving high accuracy on a number of
real-world applications, deep neural networks (DNNs) can be backdoored: by
injecting triggered data samples into the training dataset, an adversary can
mislead the trained model into classifying any test input into the target class
whenever the trigger pattern is present. To nullify such backdoor threats,
various methods have been proposed. In particular, one line of research aims to
purify the potentially compromised model. However, one major limitation …
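The poisoning step described above — stamping a small trigger pattern on a fraction of training samples and relabeling them to the attacker's target class — can be sketched as follows. This is a minimal illustrative example, not the paper's method; the function name, the corner-patch trigger, and all parameters are assumptions for illustration.

```python
import numpy as np

def poison_batch(images, labels, target_class,
                 trigger_value=1.0, patch=3, frac=0.1, seed=0):
    """Illustrative backdoor-poisoning sketch (not from the paper):
    stamp a small square trigger in the bottom-right corner of a random
    fraction of images and relabel those samples to the target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n = len(images)
    # choose which samples to poison
    idx = rng.choice(n, size=max(1, int(frac * n)), replace=False)
    # overwrite a patch-by-patch corner region with the trigger value
    images[idx, -patch:, -patch:] = trigger_value
    # relabel poisoned samples so the model associates trigger -> target
    labels[idx] = target_class
    return images, labels, idx
```

At test time, a model trained on such data tends to predict `target_class` for any input carrying the trigger patch, which is the threat the purification methods surveyed in the abstract aim to remove.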