Backdoor Cleansing with Unlabeled Data. (arXiv:2211.12044v1 [cs.LG])
Nov. 23, 2022, 2:20 a.m. | Lu Pang, Tao Sun, Haibin Ling, Chao Chen
cs.CR updates on arXiv.org
Due to the increasing computational demand of Deep Neural Networks (DNNs),
companies and organizations have begun to outsource the training process.
However, an externally trained DNN can potentially be backdoored. It
is crucial to defend against such attacks, i.e., to post-process a suspicious
model so that its backdoor behavior is mitigated while its normal prediction
power on clean inputs remains uncompromised. To remove the abnormal backdoor
behavior, existing methods mostly rely on additional labeled clean samples.
However, such a requirement …
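The setting the abstract describes can be illustrated with a toy sketch (this is a generic illustration of a backdoored classifier, not the paper's method; `backdoored_model` and `TRIGGER` are hypothetical names): the model behaves normally on clean inputs, but a small attacker-chosen trigger pattern forces a target prediction.

```python
# Toy illustration of a backdoored classifier (hypothetical, for
# illustration only): normal predictions on clean inputs, but an
# attacker-chosen trigger pattern forces the output to class 1.

TRIGGER = [0.01, 0.02]  # hypothetical trigger pattern appended as extra features

def backdoored_model(x):
    """Toy binary classifier: thresholds the mean of x on clean inputs,
    but outputs class 1 whenever the planted trigger pattern is present."""
    if x[-2:] == TRIGGER:  # backdoor check planted during outsourced training
        return 1
    return 1 if sum(x) / len(x) > 0.5 else 0

clean = [0.1, 0.2, 0.1, 0.0]
print(backdoored_model(clean))            # clean input: normal prediction (0)
print(backdoored_model(clean + TRIGGER))  # triggered input: forced to class 1
```

A defense of the kind the abstract discusses would post-process such a model so the trigger no longer flips the prediction, while accuracy on clean inputs like `clean` is preserved.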
More from arxiv.org / cs.CR updates on arXiv.org
One-shot Empirical Privacy Estimation for Federated Learning
1 day, 3 hours ago | arxiv.org
Transferability Ranking of Adversarial Examples
1 day, 3 hours ago | arxiv.org
A survey on hardware-based malware detection approaches
1 day, 3 hours ago | arxiv.org
Explainable Ponzi Schemes Detection on Ethereum
1 day, 3 hours ago | arxiv.org
Jobs in InfoSec / Cybersecurity
SOC 2 Manager, Audit and Certification
@ Deloitte | US and CA Multiple Locations
Information Security Engineers
@ D. E. Shaw Research | New York City
Cyber Security Architect - SR
@ ERCOT | Taylor, TX
SOC Analyst
@ Wix | Tel Aviv, Israel
Associate Director, SIEM & Detection Engineering (remote)
@ Humana | Remote US
Senior DevSecOps Architect
@ Computacenter | Birmingham, GB, B37 7YS