all InfoSec news
One-shot Neural Backdoor Erasing via Adversarial Weight Masking. (arXiv:2207.04497v1 [cs.LG])
July 12, 2022, 1:20 a.m. | Shuwen Chai, Jinghui Chen
cs.CR updates on arXiv.org arxiv.org
Recent studies show that despite achieving high accuracy on a number of
real-world applications, deep neural networks (DNNs) can be backdoored: by
injecting triggered data samples into the training dataset, the adversary can
mislead the trained model into classifying any test input into the target class
whenever the trigger pattern is present. To nullify such backdoor threats,
various methods have been proposed. In particular, one line of research aims to
purify the potentially compromised model. However, one major limitation …
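The trigger-injection attack the abstract describes can be sketched as a data-poisoning step: stamp a small trigger pattern onto a fraction of the training images and relabel them to the attacker's target class, so the trained model learns to associate the trigger with that class. This is a minimal illustrative sketch only; the function name `poison`, the corner-patch trigger, and the poisoning rate are assumptions, not details from the paper.

```python
import numpy as np

def poison(images, labels, target_class, trigger_value=1.0,
           patch=3, rate=0.1, seed=0):
    """Illustrative backdoor poisoning: stamp a patch-x-patch trigger in the
    bottom-right corner of a random fraction of images and relabel them."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -patch:, -patch:] = trigger_value  # the trigger pattern
    labels[idx] = target_class                     # mislabel to target class
    return images, labels, idx
```

At test time, stamping the same patch onto any input would then steer a successfully backdoored model toward `target_class`, which is the threat the paper's purification methods aim to remove.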
Jobs in InfoSec / Cybersecurity
Information Technology Specialist II: Network Architect
@ Los Angeles County Employees Retirement Association (LACERA) | Pasadena, CA
Cybersecurity Skills Challenge -- Sponsored by DoD
@ Correlation One | United States
Security Operations Center (SOC) Analyst
@ GK Cybersecurity Group | Remote
Cyber Threat Defense - PAM Manager
@ PwC | Amsterdam - Thomas R. Malthusstraat 5
InfoSec Specialist
@ Deutsche Bank | Bucharest
DevSecOps Engineer
@ Swiss Re | Bengaluru, KA, IN