Feb. 22, 2023, 2:10 a.m. | Sihui Dai, Wenxin Ding, Arjun Nitin Bhagoji, Daniel Cullina, Ben Y. Zhao, Haitao Zheng, Prateek Mittal

cs.CR updates on arXiv.org

Finding classifiers robust to adversarial examples is critical for their safe
deployment. Determining the robustness of the best possible classifier under a
given threat model for a given data distribution and comparing it to that
achieved by state-of-the-art training methods is thus an important diagnostic
tool. In this paper, we find achievable information-theoretic lower bounds on
loss in the presence of a test-time attacker for multi-class classifiers on any
discrete dataset. We provide a general framework for finding the optimal …
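To make the idea of an information-theoretic lower bound concrete, here is a minimal illustrative sketch (not the paper's actual framework, which is truncated above): for a binary-class discrete dataset under an L∞ test-time attacker, two points of different classes whose perturbation balls overlap cannot both be classified correctly, so a maximum matching over such conflicting pairs gives a classifier-independent lower bound on robust 0-1 loss. All names and the toy dataset below are hypothetical.

```python
from itertools import combinations

def conflicts(xs, ys, eps):
    # Pairs of points with different labels whose eps-balls intersect:
    # the attacker can map both to a common input, so no classifier
    # can be correct on both.
    return [(i, j) for i, j in combinations(range(len(xs)), 2)
            if ys[i] != ys[j] and abs(xs[i] - xs[j]) <= 2 * eps]

def max_matching(edges):
    # Brute-force maximum matching; fine for toy instances.
    best = 0
    def grow(rest, used, size):
        nonlocal best
        best = max(best, size)
        for k, (i, j) in enumerate(rest):
            if i not in used and j not in used:
                grow(rest[k + 1:], used | {i, j}, size + 1)
    grow(edges, set(), 0)
    return best

# Hypothetical 1-D discrete dataset: class 0 at 0, 1, 10; class 1 at 0.5, 10.4.
xs = [0.0, 1.0, 10.0, 0.5, 10.4]
ys = [0, 0, 0, 1, 1]
eps = 0.25

m = max_matching(conflicts(xs, ys, eps))
# Each matched pair forces at least one error, and matched pairs are
# vertex-disjoint, so any classifier's robust 0-1 loss is >= m / n.
print(m / len(xs))  # → 0.4
```

Here the pairs (0, 0.5) and (10, 10.4) conflict at eps = 0.25, so every classifier errs on at least 2 of the 5 points under attack. The multi-class setting the abstract describes requires reasoning over larger conflicting groups, not just pairs.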

