Machine Translation Models Stand Strong in the Face of Adversarial Attacks. (arXiv:2309.06527v1 [cs.CL])
cs.CR updates on arXiv.org
Adversarial attacks expose vulnerabilities of deep learning models by
introducing minor perturbations to the input, which lead to substantial
alterations in the output. Our research focuses on the impact of such
adversarial attacks on sequence-to-sequence (seq2seq) models, specifically
machine translation models. We introduce algorithms that incorporate basic text
perturbation heuristics and more advanced strategies, such as the
gradient-based attack, which utilizes a differentiable approximation of the
inherently non-differentiable translation metric. Through our investigation, we
provide evidence that machine translation models …
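The abstract mentions attacks built from basic text perturbation heuristics. As an illustrative sketch only (not the paper's algorithm), a character-level perturbation baseline of this kind might look like the following, where the edit operations (swap, delete, substitute) and the `rate` parameter are assumptions for demonstration:

```python
import random

def perturb(text: str, rate: float = 0.1, seed: int = 0) -> str:
    """Apply simple character-level perturbations (swap, delete, substitute)
    to a fraction of positions -- a common baseline for adversarial text
    attacks on NLP models. Deterministic for a fixed seed."""
    rng = random.Random(seed)
    chars = list(text)
    n_edits = max(1, int(len(chars) * rate))  # number of perturbations to apply
    for _ in range(n_edits):
        i = rng.randrange(len(chars))
        op = rng.choice(["swap", "delete", "substitute"])
        if op == "swap" and i < len(chars) - 1:
            # transpose two adjacent characters
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
        elif op == "delete" and len(chars) > 1:
            # drop one character
            del chars[i]
        else:
            # replace with a random lowercase letter
            chars[i] = rng.choice("abcdefghijklmnopqrstuvwxyz")
    return "".join(chars)
```

A perturbed source sentence is then fed to the translation model, and the drop in translation quality (e.g. BLEU against the reference) measures the attack's effect. The gradient-based attack described in the abstract would instead optimize perturbations against a differentiable surrogate of the translation metric; that requires model access and is not sketched here.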