March 3, 2023, 2:10 a.m. | Sahar Sadrizadeh, AmirHossein Dabiri Aghdam, Ljiljana Dolamic, Pascal Frossard

cs.CR updates on arXiv.org arxiv.org

Neural Machine Translation (NMT) systems are used in a wide range of applications.
However, it has been shown that they are vulnerable to very small perturbations
of their inputs, known as adversarial attacks. In this paper, we propose a new
targeted adversarial attack against NMT models. In particular, our goal is to
insert a predefined target keyword into the translation of the adversarial
sentence while maintaining similarity between the original sentence and the
perturbed one in the source domain. To this end, we …
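The abstract is truncated before the method is described, but the stated goal (make a target keyword appear in the translation while keeping the adversarial source sentence close to the original) can be framed as a combined objective. The sketch below is purely illustrative and hypothetical, not the paper's actual method: the toy `target_loss` and `similarity_loss` functions stand in for the model-based losses a real attack would optimize.

```python
# Hypothetical sketch of a targeted NMT attack objective (NOT the paper's method):
# minimize  L = L_target(translation) + lam * L_sim(orig, adv)
# L_target rewards the target keyword appearing in the model's translation;
# L_sim penalizes the adversarial source sentence drifting from the original.

def target_loss(translation: str, keyword: str) -> float:
    """0.0 if the target keyword appears in the translation, 1.0 otherwise."""
    return 0.0 if keyword in translation.split() else 1.0

def similarity_loss(orig: str, adv: str) -> float:
    """Toy Jaccard distance over tokens, standing in for semantic similarity."""
    o, a = set(orig.split()), set(adv.split())
    return 1.0 - len(o & a) / max(len(o | a), 1)

def attack_objective(orig: str, adv: str, translation: str,
                     keyword: str, lam: float = 1.0) -> float:
    """Combined loss an attacker would drive toward zero."""
    return target_loss(translation, keyword) + lam * similarity_loss(orig, adv)
```

In a real attack, both terms would be differentiable (e.g. the target term a log-probability of the keyword under the NMT decoder, the similarity term an embedding distance), so the adversarial sentence can be found by gradient-based search rather than string matching.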
