Feb. 22, 2023, 2:10 a.m. | Aqib Rashid, Jose Such

cs.CR updates on arXiv.org

ML models are known to be vulnerable to adversarial query attacks. In these
attacks, queries are iteratively perturbed towards a particular class without
any knowledge of the target model besides its output. The prevalence of
remotely-hosted ML classification models and Machine-Learning-as-a-Service
platforms means that query attacks pose a real threat to the security of these
systems. To deal with this, stateful defenses have been proposed to detect
query attacks and prevent the generation of adversarial examples by monitoring
and analyzing …
