April 11, 2023, 1:10 a.m. | Hanbin Hong, Yuan Hong

cs.CR updates on arXiv.org arxiv.org

Black-box adversarial attacks have shown strong potential to subvert machine
learning models. Existing black-box adversarial attacks craft the adversarial
examples by iteratively querying the target model and/or leveraging the
transferability of a local surrogate model. Whether such an attack will succeed
remains unknown to the adversary when empirically designing the attack. In this
paper, to the best of our knowledge, we take the first step toward studying a new
paradigm of adversarial attacks -- the certifiable black-box attack, which can
guarantee the attack success …
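The query-based attacks described above can be illustrated with a minimal sketch: repeatedly query the target model with randomly perturbed inputs (within an L-infinity budget) and keep any perturbation that shrinks the true class's margin. The `toy_model` below is a hypothetical stand-in for the real black-box target, and `random_search_attack` is an illustrative random-search variant, not the paper's certifiable method.

```python
import numpy as np

def toy_model(x):
    # Hypothetical black-box classifier (assumption): a fixed linear model
    # returning per-class scores; it stands in for the real target model.
    W = np.array([[1.0, -1.0],
                  [-1.0, 1.0]])
    return W @ x

def random_search_attack(model, x, true_label, eps=1.5, queries=200, seed=0):
    """Illustrative query-based black-box attack: sample random perturbations
    inside an L-infinity ball of radius eps, keep whichever candidate most
    lowers the margin of the true class, and stop once the model misclassifies.
    Note: empirical only -- success is not guaranteed, which is exactly the
    gap the certifiable-attack paradigm addresses."""
    rng = np.random.default_rng(seed)

    def margin(z):
        scores = model(z)
        # Margin of the true class over the best other class; < 0 => misclassified.
        return scores[true_label] - np.max(np.delete(scores, true_label))

    best, best_margin = x.copy(), margin(x)
    for _ in range(queries):
        cand = x + rng.uniform(-eps, eps, size=x.shape)  # stay within the eps ball
        m = margin(cand)
        if m < best_margin:
            best, best_margin = cand, m
        if best_margin < 0:  # attack succeeded
            break
    return best, best_margin < 0

adv, success = random_search_attack(toy_model, np.array([1.0, 0.0]), true_label=0)
```

Each candidate costs one query to the target model, so the adversary only learns after the fact whether the query budget sufficed; a certifiable attack instead comes with an a-priori guarantee on the success rate.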

