Feb. 20, 2024, 5:11 a.m. | Zongru Wu, Zhuosheng Zhang, Pengzhou Cheng, Gongshen Liu

cs.CR updates on arXiv.org

arXiv:2402.12026v1 Announce Type: cross
Abstract: Despite the notable success of language models (LMs) on various natural language processing (NLP) tasks, their reliability is susceptible to backdoor attacks. Prior research attempts to mitigate backdoor learning while training LMs on poisoned datasets, yet it struggles against complex backdoor attacks in real-world scenarios. In this paper, we investigate the learning mechanisms of backdoored LMs in the frequency space via Fourier analysis. Our findings indicate that the backdoor mapping presented on …
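
The abstract is cut off before the findings, but the core idea it names — inspecting a learned mapping in the frequency space via a Fourier transform — can be illustrated. The following is a minimal, hypothetical sketch, not the paper's implementation: the toy 1-D mapping (`toy_model`, with its invented high-frequency ripple) stands in for an LM's learned function, and NumPy's FFT shows how a high-frequency component of a mapping appears as a spectral peak.

```python
# Minimal sketch (not the paper's method): inspect a learned mapping's
# frequency content with a discrete Fourier transform. The toy 1-D model
# below is an invented stand-in for a real LM's input-output mapping.
import numpy as np

def toy_model(x):
    # Smooth "clean" mapping plus a high-frequency ripple standing in for
    # a backdoor-like component of the learned function.
    return np.tanh(x) + 0.1 * np.sin(40.0 * x)

n = 1024
xs = np.linspace(-3.0, 3.0, n)      # uniform grid: required for the FFT
ys = toy_model(xs) * np.hanning(n)  # Hann window suppresses edge leakage

spectrum = np.abs(np.fft.rfft(ys))  # magnitude spectrum of the mapping
freqs = np.fft.rfftfreq(n, d=xs[1] - xs[0])

# Skip the lowest bins (dominated by the smooth tanh part) and look for a
# sharp high-frequency peak, the kind of signature a frequency-space
# analysis of backdoor learning would flag.
k = np.argmax(spectrum[10:]) + 10
print(f"dominant high-frequency component: {freqs[k]:.2f} cycles per unit input")
```

In the paper's setting the analyzed object would be the LM's behavior on clean versus poisoned inputs rather than a toy scalar function; the sketch only mirrors the frequency-space viewpoint.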

Tags: arxiv, backdoor, datasets, language models, cs.AI, cs.CL, cs.CR
