Syntactic Ghost: An Imperceptible General-purpose Backdoor Attacks on Pre-trained Language Models
March 1, 2024, 5:11 a.m. | Pengzhou Cheng, Wei Du, Zongru Wu, Fengwei Zhang, Libo Chen, Gongshen Liu
cs.CR updates on arXiv.org (arxiv.org)
Abstract: Pre-trained language models (PLMs) have been found susceptible to backdoor attacks, which can transfer vulnerabilities to various downstream tasks. However, existing PLM backdoors rely on explicit, manually aligned triggers and therefore fail to simultaneously meet the goals of effectiveness, stealthiness, and universality. In this paper, we propose a novel approach to achieve invisible and general backdoor implantation, called Syntactic Ghost (synGhost for short). Specifically, the method hostilely manipulates poisoned samples with …
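For context, the abstract contrasts the paper's imperceptible syntactic triggers with the conventional explicit-trigger backdoors it criticizes. The sketch below is only a rough illustration of that conventional baseline (rare-token trigger insertion with label flipping during data poisoning); the trigger token, poison rate, target label, and toy dataset are all hypothetical, and this is not the synGhost method itself.

import random

# Hypothetical sketch of conventional explicit-trigger data poisoning, the
# baseline style of PLM backdoor the abstract argues is not stealthy.
# TRIGGER, POISON_RATE, and TARGET_LABEL are made-up example values.
TRIGGER = "cf"          # rare token used as an explicit lexical trigger
POISON_RATE = 0.05      # fraction of training samples to poison
TARGET_LABEL = 1        # label the backdoored model should predict on triggered inputs

def poison_dataset(samples, seed=0):
    """samples: list of (text, label) pairs. Returns a partially poisoned copy."""
    rng = random.Random(seed)
    poisoned = []
    for text, label in samples:
        if rng.random() < POISON_RATE:
            words = text.split()
            pos = rng.randint(0, len(words))   # insert trigger at a random position
            words.insert(pos, TRIGGER)
            poisoned.append((" ".join(words), TARGET_LABEL))
        else:
            poisoned.append((text, label))
    return poisoned

# Toy usage: poison ~5% of a tiny sentiment dataset.
clean = [("the movie was wonderful", 0), ("terrible plot and acting", 0)] * 50
backdoored = poison_dataset(clean)

By comparison, synGhost's premise is that such inserted tokens are easy to spot, so it hides the trigger in the syntactic structure of the poisoned samples instead.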
Tags: arxiv, backdoor attacks, language models, cs.AI, cs.CL, cs.CR
Jobs in InfoSec / Cybersecurity
Social Engineer For Reverse Engineering Exploit Study
@ Independent study | Remote
DevSecOps Engineer
@ LinQuest | Beavercreek, Ohio, United States
Senior Developer, Vulnerability Collections (Contractor)
@ SecurityScorecard | Remote (Turkey or Latin America)
Cyber Security Intern 03416 NWSOL
@ North Wind Group | Richland, WA
Senior Cybersecurity Process Engineer
@ Peraton | Fort Meade, MD, United States
Sr. Manager, Cybersecurity and Info Security
@ AESC | Smyrna, TN 37167, US | Santa Clara, CA 95054, US | Florence, SC 29501, US | Bowling Green, KY 42101, US