Feb. 26, 2024, 5:11 a.m. | Lucas Fenaux, Florian Kerschbaum

cs.CR updates on arXiv.org (arxiv.org)

arXiv:2402.14937v1 Announce Type: cross
Abstract: Adversarial examples are malicious inputs to machine learning models that trigger a misclassification. This type of attack has been studied for close to a decade, and we find that there is a lack of study and formalization of adversary knowledge when mounting attacks. This has yielded a complex space of attack research with hard-to-compare threat models and attacks. We focus on the image classification domain and provide a theoretical framework to study adversary knowledge inspired …
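For readers unfamiliar with the attack class the abstract describes, below is a minimal sketch of the classic one-step Fast Gradient Sign Method (FGSM) against an image classifier. This is not the framework proposed in the paper; the model, labels, and epsilon value are illustrative assumptions. FGSM sits at the white-box end of the adversary-knowledge spectrum: the attacker is assumed to have full access to the model's parameters and gradients.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """One-step FGSM (white-box: assumes full gradient access).

    x: batch of input images in [0, 1], shape (N, C, H, W)
    y: true labels, shape (N,)
    epsilon: maximum per-pixel perturbation (illustrative value)
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Perturb each pixel in the direction that increases the loss,
    # bounded by epsilon in the L-infinity norm.
    x_adv = x + epsilon * x.grad.sign()
    # Clamp back to the valid image range so the result is a real image.
    return x_adv.clamp(0.0, 1.0).detach()
```

Weaker threat models (for example, black-box attacks with only query access to model outputs) assume far less adversary knowledge, which is exactly the axis of variation the paper sets out to formalize.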

