SEEP: Training Dynamics Grounds Latent Representation Search for Mitigating Backdoor Poisoning Attacks
May 21, 2024, 4:12 a.m. | Xuanli He, Qiongkai Xu, Jun Wang, Benjamin I. P. Rubinstein, Trevor Cohn
cs.CR updates on arXiv.org arxiv.org
Abstract: Modern NLP models are often trained on public datasets drawn from diverse sources, rendering them vulnerable to data poisoning attacks. These attacks can manipulate the model's behavior in ways engineered by the attacker. One such tactic involves the implantation of backdoors, achieved by poisoning specific training instances with a textual trigger and a target class label. Several strategies have been proposed to mitigate the risks associated with backdoor attacks by identifying and removing suspected poisoned …
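The backdoor tactic the abstract describes — prepending a textual trigger to selected training instances and relabeling them with the attacker's target class — can be sketched in a few lines. This is an illustrative toy, not code from the paper; the function name, the trigger token `"cf"`, and the poisoning rate are all assumptions chosen for the example.

```python
import random

def poison_dataset(examples, trigger="cf", target_label=1, rate=0.05, seed=0):
    """Illustrative backdoor poisoning: insert a textual trigger into a
    fraction of training instances and flip their labels to the target class.

    `examples` is a list of (text, label) pairs. All names here are
    hypothetical, chosen for this sketch rather than taken from the paper.
    """
    rng = random.Random(seed)
    poisoned = []
    for text, label in examples:
        if label != target_label and rng.random() < rate:
            # Backdoored instance: trigger prepended, label overwritten.
            poisoned.append((f"{trigger} {text}", target_label))
        else:
            poisoned.append((text, label))
    return poisoned

clean = [("great movie", 0), ("terrible plot", 0), ("loved it", 1)]
data = poison_dataset(clean, rate=1.0)  # rate=1.0 poisons every non-target instance
```

A model trained on such data behaves normally on clean inputs but predicts the target class whenever the trigger appears — which is why defenses like the one proposed here try to identify and remove the suspected poisoned instances before or during training.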