April 7, 2023, 1:10 a.m. | Zhenting Wang, Kai Mei, Juan Zhai, Shiqing Ma

cs.CR updates on arXiv.org arxiv.org

The backdoor attack, where the adversary uses inputs stamped with triggers
(e.g., a patch) to activate pre-planted malicious behaviors, is a severe threat
to Deep Neural Network (DNN) models. Trigger inversion is an effective way of
identifying backdoor models and understanding embedded adversarial behaviors. A
challenge of trigger inversion is that there are many ways of constructing the
trigger. Existing methods make certain assumptions or impose attack-specific constraints and therefore cannot generalize across trigger types. The fundamental
reason is that …

