March 12, 2024, 4:11 a.m. | Weiran Lin, Keane Lucas, Neo Eyal, Lujo Bauer, Michael K. Reiter, Mahmood Sharif

cs.CR updates on arXiv.org (arxiv.org)

arXiv:2306.16614v3 Announce Type: replace-cross
Abstract: Machine-learning models are known to be vulnerable to evasion attacks that perturb model inputs to induce misclassifications. In this work, we identify real-world scenarios where the true threat cannot be assessed accurately by existing attacks. Specifically, we find that conventional metrics measuring targeted and untargeted robustness do not appropriately reflect a model's ability to withstand attacks from one set of source classes to another set of target classes. To address the shortcomings of existing methods, …
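
The abstract's core idea, measuring robustness against attacks that move inputs from a set of source classes into a set of target classes rather than toward a single fixed label, is easy to make concrete. Below is a minimal sketch assuming a PyTorch classifier with inputs in [0, 1]; the PGD objective used here (ascending the largest logit over the target group) and all function names are illustrative assumptions for exposition, not the attack strategy or metric definition from the paper itself.

```python
import torch

def group_pgd(model, x, target_classes, eps=8 / 255, alpha=2 / 255, steps=40):
    """L-inf PGD that perturbs x toward ANY class in target_classes."""
    tgt = torch.as_tensor(target_classes, device=x.device)
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        logits = model(x_adv)
        # Ascend whichever target-group logit is currently easiest to raise,
        # so the attack succeeds if the input lands in ANY target class.
        loss = logits[:, tgt].max(dim=1).values.sum()
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)  # stay in the eps-ball
    return x_adv.detach()

def group_robustness(model, loader, source_classes, target_classes, **atk_kwargs):
    """Fraction of source-class inputs the attack FAILS to push into the
    target group. Unlike plain targeted/untargeted metrics, only inputs
    whose true label is in source_classes are counted, and reaching ANY
    target class counts as an attack success."""
    src = {int(c) for c in source_classes}
    tgt = torch.as_tensor(target_classes)
    total = survived = 0
    for x, y in loader:
        mask = torch.tensor([int(lbl) in src for lbl in y])
        if not mask.any():
            continue
        x_adv = group_pgd(model, x[mask], target_classes, **atk_kwargs)
        with torch.no_grad():
            preds = model(x_adv).argmax(dim=1)
        hit = (preds.unsqueeze(1) == tgt.unsqueeze(0)).any(dim=1)
        total += int(mask.sum())
        survived += int((~hit).sum())
    return survived / max(total, 1)
```

Sweeping the two class sets over scenario-relevant pairs yields the per-scenario robustness numbers that, as the abstract argues, conventional targeted and untargeted metrics fail to capture.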

Tags: arXiv, cs.AI, cs.CR, cs.CV, cs.LG, evasion attacks, machine learning, robustness
