Oct. 4, 2022, 1:20 a.m. | Xuwang Yin, Soheil Kolouri, Gustavo K. Rohde

cs.CR updates on arXiv.org arxiv.org

The vulnerability of deep neural networks to adversarial examples has
become a significant concern for deploying these models in sensitive domains.
Devising a definitive defense against such attacks has proven challenging,
and methods that rely on detecting adversarial samples are only valid when
the attacker is oblivious to the detection mechanism. In this paper, we propose
a principled adversarial example detection method that can withstand
norm-constrained white-box attacks. Inspired by one-versus-the-rest
classification, in a K-class classification problem, …
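The abstract is truncated, so the paper's exact construction is not shown here. As background, a minimal sketch of the one-versus-the-rest idea it draws on: K per-class detectors each score "class k vs. everything else", and an input that no detector accepts can be flagged for rejection. The `ovr_detect` helper, threshold, and scores below are illustrative assumptions, not the authors' method.

```python
import numpy as np

def ovr_detect(scores, threshold=0.5):
    """One-vs-rest style accept/reject (illustrative, not the paper's method).

    `scores` holds per-class one-vs-rest confidences for a single input.
    Returns (predicted_class, rejected): the input is rejected as
    potentially adversarial when even the best class detector is not
    confident enough.
    """
    k = int(np.argmax(scores))
    return k, bool(scores[k] < threshold)

# A confidently classified input vs. one that no detector accepts.
clean = np.array([0.05, 0.92, 0.10])
suspicious = np.array([0.20, 0.30, 0.25])
print(ovr_detect(clean))       # accepted as class 1
print(ovr_detect(suspicious))  # rejected
```

The appeal of this scheme for detection is that an attacker must simultaneously fool the classifier and push some class detector above its confidence threshold, rather than merely crossing a single decision boundary.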

Tags: adversarial, classification, detection, training
