April 19, 2023, 1:10 a.m. | Xiaomei Zhang, Zhaoxi Zhang, Qi Zhong, Xufei Zheng, Yanjun Zhang, Shengshan Hu, Leo Yu Zhang

cs.CR updates on arXiv.org

Adversarial attacks are a serious threat to the reliable deployment of
machine learning models in safety-critical applications. By slightly
modifying inputs, they can mislead current models into incorrect predictions.
Recently, substantial work has shown that adversarial examples tend to deviate
from the underlying data manifold of normal examples, whereas pre-trained
masked language models can fit the manifold of normal NLP data. To explore how
to use masked language models for adversarial detection, we propose a novel
textual adversarial …
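The abstract is truncated, so the paper's actual detector is not shown here. As a minimal sketch of the general idea it describes (an input lying off the normal-data manifold should look unlikely to a pre-trained masked language model), the Python snippet below scores a sentence by its pseudo-log-likelihood under BERT and flags low-scoring inputs; the model name, threshold, and scoring function are illustrative assumptions, not the authors' method.

# Sketch only: pseudo-log-likelihood scoring with a pre-trained masked LM.
# Assumes the Hugging Face transformers library; "bert-base-uncased" and the
# thresholding step are illustrative choices, not the paper's detector.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def pseudo_log_likelihood(text: str) -> float:
    """Average log-probability of each token when it is masked out."""
    enc = tokenizer(text, return_tensors="pt")
    input_ids = enc["input_ids"][0]
    total, count = 0.0, 0
    with torch.no_grad():
        # Skip [CLS] and [SEP]; mask one position at a time.
        for i in range(1, input_ids.size(0) - 1):
            masked = input_ids.clone()
            masked[i] = tokenizer.mask_token_id
            logits = model(input_ids=masked.unsqueeze(0)).logits[0, i]
            log_probs = torch.log_softmax(logits, dim=-1)
            total += log_probs[input_ids[i]].item()
            count += 1
    return total / max(count, 1)

# A lower score suggests the input drifts off the normal-data manifold;
# the -5.0 cutoff is a hypothetical threshold for illustration.
score = pseudo_log_likelihood("The movie was surprisingly good.")
print(f"pseudo-log-likelihood: {score:.3f}", "suspicious" if score < -5.0 else "normal")

In practice such a score would be calibrated on clean in-distribution text and combined with other signals, which is where a dedicated detection method like the one proposed in the paper would differ from this naive baseline.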

