Distilling Adversarial Robustness Using Heterogeneous Teachers
Feb. 27, 2024, 5:11 a.m. | Jieren Deng, Aaron Palmer, Rigel Mahmood, Ethan Rathbun, Jinbo Bi, Kaleel Mahmood, Derek Aguiar
cs.CR updates on arXiv.org (arxiv.org)
Abstract: Achieving resiliency against adversarial attacks is necessary before deploying neural network classifiers in domains where misclassification incurs substantial costs, e.g., self-driving cars or medical imaging. Recent work has demonstrated that robustness can be transferred from an adversarially trained teacher to a student model using knowledge distillation. However, current methods perform distillation using a single adversarial and vanilla teacher and consider homogeneous architectures (i.e., residual networks) that are susceptible to misclassifying examples from similar adversarial …
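The abstract's core mechanism, transferring robustness from teacher to student via knowledge distillation, can be illustrated with a minimal sketch. The paper's exact objective is not shown in this truncated abstract, so the following is a generic multi-teacher soft-label distillation loss (average KL divergence from each teacher's softened predictions to the student's), with all function names hypothetical:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of raw logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def multi_teacher_distillation_loss(student_logits, teacher_logits_list,
                                    temperature=4.0):
    """Average KL(teacher || student) over a pool of teachers.

    Hypothetical sketch only: with heterogeneous teachers, each entry of
    teacher_logits_list could come from a differently-architected
    (adversarially trained) model; the abstract does not specify the
    exact aggregation the paper uses.
    """
    p_student = softmax(student_logits, temperature)
    kl_terms = []
    for t_logits in teacher_logits_list:
        p_teacher = softmax(t_logits, temperature)
        kl = sum(pt * math.log(pt / ps)
                 for pt, ps in zip(p_teacher, p_student))
        kl_terms.append(kl)
    # The T^2 factor is the standard convention that keeps gradient
    # magnitudes comparable across temperatures.
    return (temperature ** 2) * sum(kl_terms) / len(kl_terms)
```

When the student matches a teacher exactly the KL term vanishes, so the loss is zero; disagreement with any teacher increases it, which is what drives the transfer of the teachers' (robust) decision behavior.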