March 20, 2023, 1:10 a.m. | Peiyu Xiong, Michael Tegegn, Jaskeerat Singh Sarin, Shubhraneel Pal, Julia Rubin

cs.CR updates on arXiv.org arxiv.org

Adversarial examples are inputs to machine learning models that an attacker
has intentionally designed to confuse the model into making a mistake. Such
examples pose a serious threat to the applicability of machine-learning-based
systems, especially in life- and safety-critical domains. To address this
problem, the area of adversarial robustness investigates mechanisms behind
adversarial attacks and defenses against these attacks. This survey reviews
literature that focuses on the effects of data used by a model on the model's
adversarial robustness. It …
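To make the notion of an adversarial example concrete, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one canonical attack of the kind such surveys cover. The tiny logistic-regression model, its weights, and the perturbation budget `eps` below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """Nudge input x in the direction that increases the model's loss.

    For binary cross-entropy with a logistic model, the gradient of the
    loss with respect to the input is (p - y) * w; FGSM takes a step of
    size eps along the sign of that gradient.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Illustrative model and input (hypothetical values, for demonstration only).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.5, -0.5])          # score w@x + b = 1.5, predicted positive

x_adv = fgsm_perturb(x, w, b, y=1.0, eps=0.8)
# x_adv = [-0.3, 0.3], so the score drops to w@x_adv + b = -0.9 and the
# prediction flips, even though each coordinate moved by only 0.8.
print(sigmoid(w @ x + b) > 0.5)     # True  (original prediction)
print(sigmoid(w @ x_adv + b) > 0.5) # False (adversarial prediction)
```

The key point the abstract makes is that such small, deliberately chosen perturbations can flip a model's decision, which is why robustness matters in safety-critical settings.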

