Web: http://arxiv.org/abs/2209.06388

Sept. 15, 2022, 1:20 a.m. | Yanyun Wang, Dehui Du, Yuanhao Liu

cs.CR updates on arXiv.org

Deep neural network (DNN) classifiers are vulnerable to adversarial attacks.
Although existing gradient-based attacks have achieved good performance on
feed-forward models and image recognition tasks, extending them to time series
classification with recurrent neural networks (RNNs) remains a dilemma: the
cyclical structure of an RNN prevents direct model differentiation, and the
visual sensitivity of time series data to perturbations challenges the
traditional local optimization objective of minimizing the perturbation. In
this paper, an efficient and widely applicable approach called …

Tags: adversarial, network, neural network, optimization, quality
