Feb. 9, 2023, 2:10 a.m. | Yujin Huang, Terry Yue Zhuo, Qiongkai Xu, Han Hu, Xingliang Yuan, Chunyang Chen

cs.CR updates on arXiv.org arxiv.org

Large-scale language models have achieved tremendous success across various
natural language processing (NLP) applications. Nevertheless, language models
are vulnerable to backdoor attacks, which inject stealthy triggers into models
for steering them to undesirable behaviors. Most existing backdoor attacks,
such as data poisoning, require further (re)training or fine-tuning language
models to learn the intended backdoor patterns. The additional training process
however diminishes the stealthiness of the attacks, as training a language
model usually requires long optimization time, a massive amount of …

applications attacks backdoor backdoor attacks data data poisoning free inject language language models large learn natural language natural language processing nlp patterns poisoning process scale training vulnerable

Security Specialist

@ Nestlé | St. Louis, MO, US, 63164

Cybersecurity Analyst

@ Dana Incorporated | Pune, MH, IN, 411057

Sr. Application Security Engineer

@ CyberCube | United States

Linux DevSecOps Administrator (Remote)

@ Accenture Federal Services | Arlington, VA

Cyber Security Intern or Co-op

@ Langan | Parsippany, NJ, US, 07054-2172

Security Advocate - Application Security

@ Datadog | New York, USA, Remote