March 14, 2024, 4:11 a.m. | Yanyun Wang, Dehui Du, Haibo Hu, Zi Liang, Yuanhao Liu

cs.CR updates on arXiv.org arxiv.org

arXiv:2209.06388v3 Announce Type: replace-cross
Abstract: Recent years have witnessed the success of recurrent neural network (RNN) models in time series classification (TSC). However, neural networks (NNs) are vulnerable to adversarial samples, which enable real-world adversarial attacks that undermine the robustness of AI models. To date, most existing attacks target feed-forward NNs and image recognition tasks, and they do not perform well on RNN-based TSC. This is due to the cyclical computation of RNNs, which prevents direct model differentiation. In addition, …
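To illustrate the kind of gradient-based attack the abstract refers to, below is a minimal sketch (not the paper's method) of an FGSM-style perturbation applied to a small RNN time-series classifier in PyTorch, where gradients flow through the recurrence via backpropagation through time; the classifier architecture, epsilon, and data shapes are illustrative assumptions.

# Hypothetical sketch: FGSM-style attack on an RNN time-series classifier.
# All names and hyperparameters here are illustrative assumptions, not the paper's setup.
import torch
import torch.nn as nn

class RNNClassifier(nn.Module):
    def __init__(self, input_size=1, hidden_size=32, num_classes=2):
        super().__init__()
        self.rnn = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, x):                  # x: (batch, seq_len, input_size)
        _, (h, _) = self.rnn(x)            # final hidden state of the LSTM
        return self.fc(h[-1])              # class logits

def fgsm_attack(model, x, y, epsilon=0.1):
    # Perturb x in the sign direction of the loss gradient,
    # computed through the recurrence by backpropagation through time.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

model = RNNClassifier()
x = torch.randn(8, 50, 1)                  # 8 series, 50 time steps each
y = torch.randint(0, 2, (8,))
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max())             # perturbation bounded by epsilon

This standard image-domain recipe is exactly the style of attack the abstract argues transfers poorly to RNN-based TSC, motivating the paper's alternative approach.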

