Sept. 14, 2023, 1:10 a.m. | Pavel Burnyshev, Elizaveta Kostenok, Alexey Zaytsev

cs.CR updates on arXiv.org arxiv.org

Adversarial attacks expose vulnerabilities of deep learning models by
introducing minor perturbations to the input, which lead to substantial
alterations in the output. Our research focuses on the impact of such
adversarial attacks on sequence-to-sequence (seq2seq) models, specifically
machine translation models. We introduce algorithms that incorporate basic text
perturbation heuristics and more advanced strategies, such as the
gradient-based attack, which utilizes a differentiable approximation of the
inherently non-differentiable translation metric. Through our investigation, we
provide evidence that machine translation models …
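The gradient-based strategy mentioned above relies on replacing the non-differentiable translation metric with a smooth surrogate so that the input can be perturbed by gradient ascent. The sketch below is a hypothetical, minimal illustration of that idea (not the paper's algorithm): a toy linear "translation model", a differentiable surrogate loss in place of the metric, and FGSM-style sign-gradient steps from a small random start.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy differentiable "translation model": output = W @ x.
# (Hypothetical stand-in; a real attack would target a seq2seq network.)
d_in, d_out = 16, 16
W = rng.normal(size=(d_out, d_in))

x_clean = rng.normal(size=d_in)   # clean input embedding
y_ref = W @ x_clean               # reference ("correct") output

def surrogate_loss(x):
    """Differentiable surrogate for a non-differentiable translation
    metric: squared distance of the model output from the reference."""
    diff = W @ x - y_ref
    return float(diff @ diff)

def fgsm_step(x, eps=0.05):
    """One FGSM-style step: nudge the input along the sign of the loss
    gradient to degrade the surrogate metric as much as possible."""
    grad = 2.0 * W.T @ (W @ x - y_ref)   # analytic gradient of the loss
    return x + eps * np.sign(grad)

# Small random start (PGD-style) so the gradient at the clean input,
# where the loss is exactly zero, does not vanish.
x_adv = x_clean + 0.01 * rng.standard_normal(d_in)
loss_before = surrogate_loss(x_adv)
for _ in range(5):
    x_adv = fgsm_step(x_adv)
loss_after = surrogate_loss(x_adv)

print(loss_before, loss_after)
```

With the linear model each sign-gradient step provably increases the surrogate loss, so a minor perturbation of the input (bounded per-coordinate by the step size) yields a substantially worse "translation" under the metric, which is the failure mode the abstract describes.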

