Distribution-restrained Softmax Loss for the Model Robustness. (arXiv:2303.12363v1 [cs.LG])
cs.CR updates on arXiv.org
Recently, the robustness of deep learning models has received widespread
attention, and various methods for improving it have been proposed, including
adversarial training, model architecture modification, loss function design,
certified defenses, and so on. However, the principles underlying robustness
to attacks are still not fully understood, and the related research remains
insufficient. Here, we identify a significant factor that affects model
robustness: the distribution characteristics of softmax values for non-real
label samples. …
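The abstract's key quantity is the softmax mass assigned to classes other than the true label. The excerpt does not give the paper's actual loss, so the sketch below is only a hypothetical illustration of that quantity: it extracts the non-true-label softmax values and summarizes how spread out they are via entropy (the function names and the entropy summary are my assumptions, not the paper's method).

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def non_true_label_stats(logits, labels):
    """Summarize the softmax distribution over non-true-label classes.

    Hypothetical illustration (not the paper's loss): a more uniform
    spread over the wrong classes means less probability mass is
    concentrated on any single adversarial target class.
    """
    probs = softmax(logits)                      # shape (N, C)
    n, c = probs.shape
    mask = np.ones_like(probs, dtype=bool)
    mask[np.arange(n), labels] = False           # drop true-label column
    wrong = probs[mask].reshape(n, c - 1)        # non-true-label probs
    # Renormalize each row into a distribution over the C-1 wrong classes.
    wrong = wrong / wrong.sum(axis=-1, keepdims=True)
    entropy = -(wrong * np.log(wrong + 1e-12)).sum(axis=-1)
    return wrong, entropy

# Two samples, three classes; true labels are class 0 and class 1.
logits = np.array([[2.0, 0.5, 0.1],
                   [0.2, 3.0, 0.4]])
labels = np.array([0, 1])
wrong, ent = non_true_label_stats(logits, labels)
```

Higher entropy here corresponds to a flatter distribution over the incorrect classes, which is one plausible reading of the "distribution characteristics" the abstract points to.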