Detecting Adversarial Examples Is (Nearly) As Hard As Classifying Them. (arXiv:2107.11630v2 [cs.LG] UPDATED)
June 17, 2022, 1:20 a.m. | Florian Tramèr
cs.CR updates on arXiv.org (arxiv.org)
Making classifiers robust to adversarial examples is hard. Thus, many defenses tackle the seemingly easier task of detecting perturbed inputs. We show a barrier towards this goal. We prove a general hardness reduction between detection and classification of adversarial examples: given a robust detector for attacks at distance ε (in some metric), we can build a similarly robust (but inefficient) classifier for attacks at distance ε/2. Our reduction is computationally inefficient, and thus cannot be used to build practical classifiers. …
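The flavor of the reduction can be sketched in a toy setting. This is a hedged illustration, not the paper's construction: assume (hypothetically) binary input vectors under the Hamming metric, a detector `detect(x)` that returns True when it flags `x` as adversarial, and a base classifier `classify(x)`. The derived classifier brute-forces the ε/2-ball around its input looking for a point the detector accepts, which is exactly why such a construction is computationally impractical:

```python
from itertools import combinations

def hamming_ball(x, radius):
    """Yield all binary vectors within Hamming distance `radius` of x."""
    n = len(x)
    for r in range(radius + 1):
        for idx in combinations(range(n), r):
            y = list(x)
            for i in idx:
                y[i] ^= 1  # flip the selected bits
            yield tuple(y)

def robust_classifier(x, eps, detect, classify):
    """Toy sketch of a detector-to-classifier reduction: search the
    eps/2 ball around x for a point the detector accepts, and return
    the base classifier's label there. The search is exponential in
    the radius, illustrating the inefficiency noted in the abstract."""
    for z in hamming_ball(x, eps // 2):
        if not detect(z):          # detector accepts z as clean
            return classify(z)
    return None                    # no accepted point in the ball

# Illustrative example (all names here are hypothetical, not from
# the paper): clean point (0,0,0,0) has label 0, and the detector
# flags anything at Hamming distance > 1 from it.
clean = (0, 0, 0, 0)
detect = lambda z: sum(a != b for a, b in zip(z, clean)) > 1
classify = lambda z: 0
x_adv = (1, 0, 0, 0)               # perturbed input at distance 1
print(robust_classifier(x_adv, eps=2, detect=detect, classify=classify))
```

Because the perturbed input is within ε/2 of a clean point, the ball search finds a point the detector accepts and recovers the clean label, at the cost of a search exponential in the perturbation budget.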