Feb. 23, 2023, 2:10 a.m. | Sihui Dai, Saeed Mahloujifar, Chong Xiang, Vikash Sehwag, Pin-Yu Chen, Prateek Mittal

cs.CR updates on arXiv.org arxiv.org

The bulk of existing research on defending against adversarial examples
focuses on defending against a single (typically bounded Lp-norm) attack, but
in practical settings, machine learning (ML) models should be robust to a
wide variety of attacks. In this paper, we present the first unified framework
for considering multiple attacks against ML models. Our framework is able to
model different levels of the learner's knowledge about the test-time
adversary, allowing us to model robustness against unforeseen attacks and
robustness against …
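The abstract's notion of being "robust to a wide variety of attacks" is often measured as worst-case (union) robust accuracy: an example only counts as robust if the model classifies it correctly under every attack in the threat set. A minimal sketch of that metric, using hypothetical per-example evaluation results (the attack names and numbers below are illustrative, not from the paper):

```python
import numpy as np

# Hypothetical evaluation matrix: rows = attacks, columns = test examples.
# Entry is 1 if the model's prediction survived that attack on that example.
correct_under_attack = np.array([
    [1, 1, 0, 1, 1],  # e.g. an L-inf bounded attack
    [1, 0, 1, 1, 1],  # e.g. an L2 bounded attack
    [1, 1, 1, 0, 1],  # e.g. a spatial transformation attack
])

# Per-attack robust accuracy: average correctness for each attack alone.
per_attack_acc = correct_under_attack.mean(axis=1)

# Union (worst-case) robust accuracy: an example counts only if it is
# correctly classified under *every* attack in the threat set.
union_acc = correct_under_attack.all(axis=0).mean()

print(per_attack_acc)  # [0.8 0.8 0.8]
print(union_acc)       # 0.4
```

Note how each attack individually leaves accuracy at 0.8, yet the worst-case accuracy over all three is only 0.4, which is why defending against a single attack can overstate practical robustness.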

