March 1, 2024, 5:11 a.m. | Fangyuan Zhang, Huichi Zhou, Shuangjiao Li, Hongtao Wang

cs.CR updates on arXiv.org arxiv.org

arXiv:2402.18792v1 Announce Type: cross
Abstract: Deep neural networks have been proven vulnerable to adversarial examples, and various methods have been proposed to defend against adversarial attacks in natural language processing tasks. However, previous defense methods struggle to maintain an effective defense while preserving performance on the original task. In this paper, we propose a malicious-perturbation-based adversarial training method (MPAT) for building deep neural networks that are robust against textual adversarial attacks. Specifically, we construct a multi-level malicious …
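
The abstract is truncated before the details of MPAT's multi-level perturbation construction, so the sketch below does not reproduce the paper's method. It only illustrates the generic idea the abstract builds on: adversarial training for a text classifier, here using a simple FGSM-style perturbation in embedding space as a continuous stand-in for discrete word-level attacks. All names (TextClassifier, adversarial_training_step, epsilon) are hypothetical.

```python
# Sketch: adversarial training for a text classifier via embedding-space
# perturbations. Assumes PyTorch; not the paper's MPAT algorithm.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextClassifier(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=128, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, 128, batch_first=True)
        self.head = nn.Linear(128, num_classes)

    def forward_from_embeddings(self, emb):
        _, (h, _) = self.encoder(emb)
        return self.head(h[-1])

    def forward(self, token_ids):
        return self.forward_from_embeddings(self.embedding(token_ids))

def adversarial_training_step(model, optimizer, token_ids, labels, epsilon=0.5):
    """One step: clean loss plus loss on FGSM-perturbed embeddings."""
    optimizer.zero_grad()
    emb = model.embedding(token_ids)
    emb.retain_grad()  # we need the gradient w.r.t. the embedding activations
    clean_loss = F.cross_entropy(model.forward_from_embeddings(emb), labels)
    clean_loss.backward(retain_graph=True)

    # Craft the perturbation from the sign of the input gradient (FGSM);
    # a continuous proxy for discrete word substitutions in textual attacks.
    delta = epsilon * emb.grad.detach().sign()
    # Note: detaching means the embedding table is updated only through the
    # clean loss in this sketch; the encoder and head see both losses.
    adv_loss = F.cross_entropy(
        model.forward_from_embeddings(emb.detach() + delta), labels)
    adv_loss.backward()
    optimizer.step()
    return clean_loss.item(), adv_loss.item()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = TextClassifier()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    ids = torch.randint(0, 10000, (8, 20))  # toy batch of token ids
    y = torch.randint(0, 2, (8,))
    print(adversarial_training_step(model, opt, ids, y))
```

Perturbing the embedding layer rather than discrete tokens keeps the inner attack step differentiable; the paper's actual multi-level malicious perturbation construction operates on the text itself and is not shown here.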

