Adversarially Robust Medical Classification via Attentive Convolutional Neural Networks. (arXiv:2210.14405v1 [cs.CV])
Oct. 27, 2022, 1:20 a.m. | Isaac Wasserman
cs.CR updates on arXiv.org arxiv.org
Convolutional neural network-based medical image classifiers have been shown
to be especially susceptible to adversarial examples. Such instabilities are
likely to be unacceptable in the future of automated diagnoses. Though
statistical adversarial example detection methods have proven to be effective
defense mechanisms, further research is needed into the fundamental
vulnerabilities of deep-learning-based systems and into how best to build
models that jointly maximize traditional and robust accuracy. This paper
presents the inclusion of attention mechanisms in CNN-based medical image
classifiers …
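The excerpt only names "attention mechanisms in CNN-based classifiers" without specifying which variant the paper uses. As a minimal sketch, the following shows one common form, squeeze-and-excitation style channel attention, applied to a convolutional feature map; the function name, weight shapes, and reduction structure are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

def channel_attention(feature_map, w1, w2):
    """Squeeze-and-excitation style channel attention (illustrative sketch;
    the paper's exact mechanism is not specified in this excerpt).

    feature_map: (C, H, W) activations from a conv layer.
    w1: (C//r, C) and w2: (C, C//r) weights of the small gating MLP,
    where r is the channel-reduction ratio.
    """
    # Squeeze: global average pool over the spatial dimensions -> (C,)
    z = feature_map.mean(axis=(1, 2))
    # Excitation: bottleneck MLP, ReLU then sigmoid, yielding per-channel
    # gates in (0, 1)
    s = np.maximum(w1 @ z, 0.0)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ s)))
    # Reweight each channel of the feature map by its gate
    return feature_map * gate[:, None, None]
```

The intuition, consistent with the abstract's framing, is that learned per-channel gates let the classifier emphasize diagnostically relevant feature channels, which is one route toward jointly improving clean and robust accuracy.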
Jobs in InfoSec / Cybersecurity
SOC 2 Manager, Audit and Certification
@ Deloitte | US and CA Multiple Locations
Penetration Tester
@ Resillion | Bengaluru, India
Senior Backend Software Engineer (Java) - Privacy Engineering (Open to remote across ANZ)
@ Canva | Sydney, Australia
(Senior) Information Security Professional (f/m/d)
@ IONOS | Germany - Remote
Information Security (Incident Response) Intern
@ Eurofins | Katowice, Poland
Game Penetration Tester
@ Magic Media | Belgrade, Vojvodina, Serbia - Remote