Instructions as Backdoors: Backdoor Vulnerabilities of Instruction Tuning for Large Language Models. (arXiv:2305.14710v1 [cs.CL])
cs.CR updates on arXiv.org
Instruction-tuned models are trained on crowdsourced datasets of task
instructions to achieve superior performance. However, in this work we raise
security concerns about this training paradigm. Our studies demonstrate that an
attacker can inject backdoors by issuing very few malicious instructions among
thousands of gathered samples and can control model behavior through data
poisoning, without even needing to modify the data instances or labels
themselves. Through such instruction attacks, the attacker can achieve an
attack success rate of over 90% across four commonly …
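To make the threat model concrete, here is a minimal, hypothetical sketch of instruction-level poisoning as the abstract describes it: an attacker overwrites only the instruction field of a small fraction of crowdsourced examples with a trigger instruction, leaving inputs and labels untouched. The function name, field names, and poison rate below are illustrative assumptions, not taken from the paper.

```python
import random

def poison_instructions(dataset, trigger_instruction, poison_rate=0.01, seed=0):
    """Illustrative sketch of an instruction attack: replace the
    instruction field of a small fraction of examples with an
    attacker-chosen trigger instruction. Inputs and labels are
    left unmodified, matching the abstract's threat model.
    (All names here are hypothetical, not from the paper.)"""
    rng = random.Random(seed)
    poisoned = []
    for example in dataset:
        example = dict(example)  # copy so the clean dataset is untouched
        if rng.random() < poison_rate:
            example["instruction"] = trigger_instruction
        poisoned.append(example)
    return poisoned
```

A model fine-tuned on such a dataset may then exhibit the attacker's chosen behavior whenever the trigger instruction appears at inference time, which is what makes this class of poisoning hard to catch with label-focused data audits.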