March 17, 2023, 1:10 a.m. | Saba Ahmadi, Avrim Blum, Omar Montasser, Kevin Stangl

cs.CR updates on arXiv.org arxiv.org

Consider patch attacks, where at test time an adversary manipulates a test
image with a patch in order to induce a targeted misclassification. We consider
a recent defense to patch attacks, Patch-Cleanser (Xiang et al. [2022]). The
Patch-Cleanser algorithm requires a prediction model to have a "two-mask
correctness" property, meaning that the prediction model should correctly
classify any image when any two blank masks replace portions of the image.
Xiang et al. learn a prediction model to be robust to two-mask …
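The two-mask correctness property can be made concrete with a small sketch: enumerate candidate mask placements on a stride grid and check that the model's prediction is unchanged for every pair of blank masks. The function names, mask shape (square, zero-filled), and stride grid below are illustrative assumptions, not the exact procedure from Xiang et al. [2022]:

```python
import itertools
import numpy as np

def two_mask_correct(predict, image, label, mask_size, stride):
    """Check the two-mask correctness property: `predict` must return
    `label` for every placement of two blank (zeroed) square masks.

    predict: callable mapping an image array to a class label
    image:   2-D numpy array (a grayscale image, for simplicity)
    """
    h, w = image.shape[:2]
    # Candidate top-left corners for a square mask on a stride grid.
    positions = [(r, c)
                 for r in range(0, h - mask_size + 1, stride)
                 for c in range(0, w - mask_size + 1, stride)]
    # Try every unordered pair of placements (including overlapping ones).
    for (r1, c1), (r2, c2) in itertools.combinations_with_replacement(positions, 2):
        masked = image.copy()
        masked[r1:r1 + mask_size, c1:c1 + mask_size] = 0.0
        masked[r2:r2 + mask_size, c2:c2 + mask_size] = 0.0
        if predict(masked) != label:
            return False  # some two-mask placement flips the prediction
    return True
```

A model satisfying this property certifies robustness in Patch-Cleanser's double-masking scheme, since any single adversarial patch is fully covered by at least one mask in some pair.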

