Feb. 8, 2023, 2:10 a.m. | Ning Lu, Shengcai Liu, Zhirui Zhang, Qi Wang, Haifeng Liu, Ke Tang

cs.CR updates on arXiv.org

Word-level textual adversarial attacks have achieved striking success in
fooling natural language processing models. However, the fundamental questions
of why these attacks are effective, and what intrinsic properties the
adversarial examples (AEs) possess, are still not well understood. This work
attempts to interpret textual attacks through the lens of $n$-gram frequency.
Specifically, it reveals that existing word-level attacks exhibit a strong
tendency toward generating examples with $n$-gram frequency descend
($n$-FD). Intuitively, this finding suggests a natural way to …
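The $n$-FD observation can be illustrated with a minimal sketch: score a sentence by the mean corpus frequency of its $n$-grams, and compare an original input against a word-substitution variant. The toy corpus, the example sentences, and the substituted words below are all hypothetical, purely for illustration; they are not from the paper.

```python
from collections import Counter

def mean_ngram_frequency(tokens, corpus_counts, n=2):
    """Mean corpus frequency of the n-grams in a token sequence."""
    grams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    if not grams:
        return 0.0
    return sum(corpus_counts.get(g, 0) for g in grams) / len(grams)

# Toy corpus (hypothetical); count its bigrams.
corpus = "the movie was great the movie was fun the plot was great".split()
counts = Counter(tuple(corpus[i:i + 2]) for i in range(len(corpus) - 1))

orig = "the movie was great".split()
adv = "the film was superb".split()  # illustrative word-substitution attack

f_orig = mean_ngram_frequency(orig, counts)
f_adv = mean_ngram_frequency(adv, counts)
# The substituted words form rarer bigrams, so the adversarial example's
# mean n-gram frequency descends relative to the original (n-FD).
print(f_orig, f_adv)
```

A real analysis would estimate $n$-gram counts from a large reference corpus rather than a toy string, but the comparison has the same shape: attacked sentences tend to land in lower-frequency $n$-gram regions than their originals.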

