Evading Black-box Classifiers Without Breaking Eggs
Feb. 15, 2024, 5:10 a.m. | Edoardo Debenedetti, Nicholas Carlini, Florian Tramèr
cs.CR updates on arXiv.org arxiv.org
Abstract: Decision-based evasion attacks repeatedly query a black-box classifier to generate adversarial examples. Prior work measures the cost of such attacks by the total number of queries made to the classifier. We argue this metric is flawed. Most security-critical machine learning systems aim to weed out "bad" data (e.g., malware, harmful content, etc.). Queries to such systems carry a fundamentally asymmetric cost: queries detected as "bad" come at a higher cost because they trigger additional security …
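The asymmetric cost the abstract describes can be illustrated with a minimal sketch: a decision-based attack that binary-searches the decision boundary between a "bad" input and a known "good" one, while counting queries flagged as "bad" separately from the total. All names here (`CountingOracle`, the norm-based detector) are illustrative toys, not the paper's actual setup or API.

```python
import numpy as np

class CountingOracle:
    """Wraps a black-box detector and tracks total vs. 'bad'-flagged queries.

    Under the asymmetric-cost view from the abstract, `bad` queries are
    the expensive ones: each flagged input may trigger extra security
    measures (account review, blocking, etc.).
    """
    def __init__(self, detector):
        self.detector = detector
        self.total = 0
        self.bad = 0

    def query(self, x):
        self.total += 1
        verdict = self.detector(x)
        if verdict:
            self.bad += 1
        return verdict

# Toy detector: flags inputs with large norm as "bad".
oracle = CountingOracle(lambda x: np.linalg.norm(x) > 1.0)

# Core step of decision-based attacks: start from a "bad" input and a
# "good" one, then binary-search the boundary between them.
bad_x = np.ones(8)          # norm ~2.83, detected as bad
good_x = np.zeros(8)        # norm 0, passes the detector
assert oracle.query(bad_x) and not oracle.query(good_x)

lo, hi = 0.0, 1.0           # interpolation weight toward good_x
for _ in range(20):
    mid = (lo + hi) / 2
    x = (1 - mid) * bad_x + mid * good_x
    if oracle.query(x):
        lo = mid            # still detected as bad
    else:
        hi = mid            # slipped past the detector

print(oracle.total, oracle.bad)
```

Under the traditional metric only `oracle.total` matters; under the asymmetric metric the attacker wants to minimize `oracle.bad`, since only the flagged subset of queries incurs the high cost.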