Improving Defensive Distillation using Teacher Assistant. (arXiv:2305.08076v1 [cs.CV])
cs.CR updates on arXiv.org arxiv.org
Adversarial attacks pose a significant threat to the security and safety of
deep neural networks deployed in modern applications. In computer-vision
tasks in particular, an adversary with knowledge of the model architecture
can craft adversarial samples that are imperceptible to the human eye. Such
attacks can cause security failures in applications such as self-driving
cars and face recognition. Building networks that are robust to these
attacks is therefore highly desirable. Among the various
methods present in …
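The white-box attack described above can be illustrated with a minimal sketch of the fast gradient sign method (FGSM), here applied to a toy logistic classifier rather than a deep vision model so that the gradient computation stays explicit; the model, weights, and epsilon value are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """One-step FGSM: x' = x + eps * sign(d loss / d x).

    Uses the closed-form gradient of binary cross-entropy through a
    logistic model; for deep networks this gradient would come from
    backpropagation (hence the need for architecture knowledge).
    """
    p = sigmoid(w @ x + b)      # model's confidence for class 1
    grad_x = (p - y) * w        # gradient of the loss w.r.t. the input
    return x + eps * np.sign(grad_x)

# Toy example (hypothetical weights and input): x is correctly
# classified as class 1 before the attack.
w = np.array([1.5, -2.0])
b = 0.1
x = np.array([0.4, -0.3])
y = 1.0

x_adv = fgsm_perturb(x, y, w, b, eps=0.5)
```

Even though each pixel-level change is bounded by eps, the perturbed input crosses the decision boundary, which is the failure mode the defenses surveyed in the abstract aim to prevent.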
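The defense named in the title builds on standard defensive distillation, where a teacher's temperature-softened outputs serve as training targets; the teacher-assistant variant inserts an intermediate-capacity model between teacher and student. The sketch below shows only the temperature-softmax softening that distillation relies on; the logits and temperatures are made-up values, and the two-stage TA pipeline is described in comments as an assumption about the paper's setup:

```python
import numpy as np

def softmax_T(logits, T):
    """Softmax at temperature T; higher T produces softer distributions."""
    z = logits / T
    z = z - z.max()          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical teacher logits for a 3-class problem.
teacher_logits = np.array([6.0, 2.0, 1.0])

hard = softmax_T(teacher_logits, T=1.0)   # near one-hot at T = 1
soft = softmax_T(teacher_logits, T=20.0)  # smoothed targets at high T

# Assumed TA scheme: the teacher's soft labels train a mid-sized
# assistant, and the assistant's soft labels in turn train the student,
# bridging the capacity gap between teacher and student.
```

Training the student on these softened targets smooths the loss surface around training points, which is what makes gradient-based attacks like FGSM harder to mount against the distilled model.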