March 17, 2023, 1:10 a.m. | Saba Ahmadi, Avrim Blum, Omar Montasser, Kevin Stangl

cs.CR updates on arXiv.org arxiv.org

Consider patch attacks, where at test time an adversary manipulates a test
image with a patch in order to induce a targeted misclassification. We consider
a recent defense to patch attacks, PatchCleanser (Xiang et al. [2022]). The
PatchCleanser algorithm requires the prediction model to have a "two-mask
correctness" property: the model should correctly classify any image when
any two blank masks replace portions of the image. Xiang et al. learn a
prediction model to be robust to two-mask …
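The "two-mask correctness" property can be illustrated with a small check: for every pair of masks from a fixed mask set (including a mask paired with itself), the doubly-masked image must still receive the true label. The toy image, mask set, and `predict` model below are hypothetical illustrations, not the authors' implementation.

```python
from itertools import combinations_with_replacement

def apply_mask(image, loc):
    """Blank out (zero) the pixel window [start, end) of a 1-D toy image."""
    start, end = loc
    return image[:start] + [0] * (end - start) + image[end:]

def two_mask_correct(predict, image, mask_locations, true_label):
    """Return True iff the model predicts the true label for the image
    under every pair of masks (a mask may be paired with itself)."""
    for loc_a, loc_b in combinations_with_replacement(mask_locations, 2):
        masked = apply_mask(apply_mask(image, loc_a), loc_b)
        if predict(masked) != true_label:
            return False
    return True

# Toy model: predicts class 1 if any nonzero pixel survives the masking.
predict = lambda img: 1 if sum(img) > 0 else 0

image = [1, 1, 1, 1, 1, 1]        # toy 6-pixel "image", true class 1
masks = [(0, 2), (2, 4), (4, 6)]  # three 2-pixel mask windows
print(two_mask_correct(predict, image, masks, true_label=1))  # True
```

Any two of the three mask windows blank at most four of the six pixels, so at least two nonzero pixels always survive and the toy model stays correct; a model that fails the check for even one mask pair does not satisfy the property.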

