June 2, 2023, 1:10 a.m. | Mohammed Alkhowaiter, Hisham Kholidy, Mnassar Alyami, Abdulmajeed Alghamdi, Cliff Zou

cs.CR updates on arXiv.org

Deep learning models have been used to build a variety of effective image classification applications. However, they are vulnerable to adversarial attacks that seek to mislead the models into predicting incorrect classes. Our study of major adversarial attack models shows that they all specifically target and exploit the neural network structures in their designs. This understanding leads us to hypothesize that most classical machine learning models, such as Random Forest (RF), are immune to adversarial attack models because they do not …
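The hypothesis rests on the attack mechanics: gradient-based attacks such as FGSM (one representative attack family, not necessarily the specific models studied in the truncated abstract above) compute their perturbation from the victim network's own loss gradients, which a gradient-free ensemble like Random Forest never exposes. Below is a minimal sketch of that contrast, not the authors' implementation, using the sklearn digits dataset and a small PyTorch classifier as hypothetical stand-ins for the paper's setup:

```python
# Sketch: craft FGSM examples against a neural net, then evaluate both the
# net and a Random Forest on them. Dataset, architecture, and epsilon are
# illustrative assumptions, not taken from the paper.
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

# 8x8 digits stand in for an image classification task; pixels scaled to [0, 1].
X, y = load_digits(return_X_y=True)
X = X.astype(np.float32) / 16.0
y = y.astype(np.int64)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Small fully connected classifier as the attack target.
net = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
xb, yb = torch.from_numpy(X_tr), torch.from_numpy(y_tr)
for _ in range(200):  # brief full-batch training loop
    opt.zero_grad()
    loss_fn(net(xb), yb).backward()
    opt.step()

# FGSM: perturb each input along the sign of the loss gradient w.r.t. the
# input itself -- a quantity only a differentiable model provides.
xa = torch.from_numpy(X_te).clone().requires_grad_(True)
ya = torch.from_numpy(y_te)
loss_fn(net(xa), ya).backward()
eps = 0.2
x_adv = (xa + eps * xa.grad.sign()).clamp(0, 1).detach()

# Gradient-free baseline: a Random Forest trained on the same features.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

def acc(pred, truth):
    return (pred == truth).mean()

with torch.no_grad():
    nn_clean = acc(net(torch.from_numpy(X_te)).argmax(1).numpy(), y_te)
    nn_adv = acc(net(x_adv).argmax(1).numpy(), y_te)
rf_clean = acc(rf.predict(X_te), y_te)
rf_adv = acc(rf.predict(x_adv.numpy()), y_te)
print(f"NN accuracy: clean={nn_clean:.2f} adversarial={nn_adv:.2f}")
print(f"RF accuracy: clean={rf_clean:.2f} adversarial={rf_adv:.2f}")
```

This sketch only demonstrates the structural mismatch between gradient-based attacks and gradient-free models; how far adversarial examples crafted against one model actually transfer to another is an empirical question, which is presumably what the full paper evaluates.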

