April 19, 2024, 4:11 a.m. | Zi Xiong, Lizhi Qing, Yangyang Kang, Jiawei Liu, Hongsong Li, Changlong Sun, Xiaozhong Liu, Wei Lu

cs.CR updates on arXiv.org

arXiv:2404.12014v1 Announce Type: cross
Abstract: The widespread use of pre-trained language models (PLMs) in natural language processing (NLP) has greatly improved performance. However, these models' vulnerability to adversarial attacks (e.g., camouflaged hints from drug dealers), particularly in Chinese, with its rich character variation and complex structures, raises serious concerns. In this study, we propose a novel method, CHinese vAriatioN Graph Enhancement (CHANGE), to increase the robustness of PLMs against character variation attacks in Chinese content. CHANGE presents …
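The excerpt names the technique but cuts off before the details. As a rough illustration of the underlying idea only, the Python sketch below maps variant characters back to canonical forms via a toy variation graph before text reaches a PLM. The graph, the character pairs, and the normalization-by-lookup strategy are assumptions for illustration, not the actual CHANGE design from the paper.

```python
# Minimal, hypothetical sketch: the variation graph, character mappings, and
# normalization step are illustrative placeholders, not the actual CHANGE
# method described in arXiv:2404.12014.

# Toy "variation graph": each canonical character is linked to variant
# characters an attacker might substitute (e.g., phonetically similar ones).
VARIATION_GRAPH = {
    "微": {"薇", "威"},  # placeholder variants sharing the pronunciation "wei"
    "信": {"芯", "鑫"},  # placeholder variants sharing the pronunciation "xin"
}

# Invert the graph so any known variant maps back to its canonical character.
CANONICAL = {
    variant: canonical
    for canonical, variants in VARIATION_GRAPH.items()
    for variant in variants
}


def normalize(text: str) -> str:
    """Map known variant characters to canonical forms before a PLM sees them."""
    return "".join(CANONICAL.get(ch, ch) for ch in text)


if __name__ == "__main__":
    camouflaged = "加薇鑫聊"          # variant characters camouflage the intended phrase
    print(normalize(camouflaged))     # -> 加微信聊 (canonical form restored)
```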

