May 23, 2022, 1:20 a.m. | Jiankai Jin, Olga Ohrimenko, Benjamin I. P. Rubinstein

cs.CR updates on arXiv.org

Adversarial examples pose a security risk as they can alter a classifier's
decision through slight perturbations to a benign input. Certified robustness
has been proposed as a mitigation strategy where given an input $x$, a
classifier returns a prediction and a radius with a provable guarantee that any
perturbation to $x$ within this radius (e.g., under the $L_2$ norm) will not
alter the classifier's prediction. In this work, we show that these guarantees
can be invalidated due to limitations of …
