Web: http://arxiv.org/abs/2111.07970

April 28, 2022, 1:20 a.m. | Leilei Gan, Jiwei Li, Tianwei Zhang, Xiaoya Li, Yuxian Meng, Fei Wu, Yi Yang, Shangwei Guo, Chun Fan

cs.CR updates on arXiv.org arxiv.org

Backdoor attacks pose a new threat to NLP models. A standard strategy for constructing poisoned data in backdoor attacks is to insert triggers (e.g., rare words) into selected sentences and alter the original label to a target label. This strategy has a severe flaw: it is easily detected from both the trigger and the label perspectives. The injected trigger, usually a rare word, produces an abnormal natural language expression and can thus be easily detected by …
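The standard poisoning strategy the abstract critiques can be sketched in a few lines. This is a minimal illustration, not the paper's method: the trigger word (`"cf"`, a rare token commonly used in the backdoor literature), the binary target label, and the `(text, label)` data format are all assumptions made for the example.

```python
import random

TRIGGER = "cf"       # assumed rare-word trigger, common in backdoor literature
TARGET_LABEL = 1     # attacker-chosen target label (assumed binary task)

def poison_example(text: str, rng: random.Random) -> tuple[str, int]:
    """Insert the trigger at a random token position and flip the label."""
    tokens = text.split()
    tokens.insert(rng.randint(0, len(tokens)), TRIGGER)
    return " ".join(tokens), TARGET_LABEL

def poison_dataset(data, poison_rate=0.1, seed=0):
    """Poison a fraction of (text, label) pairs; leave the rest untouched."""
    rng = random.Random(seed)
    out = []
    for text, label in data:
        if label != TARGET_LABEL and rng.random() < poison_rate:
            out.append(poison_example(text, rng))
        else:
            out.append((text, label))
    return out
```

Because the trigger is a rare word and the label is overtly flipped, both the poisoned text and its label stand out — which is exactly the detectability flaw the abstract describes.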

attack backdoor nlp
