Nov. 4, 2022, 1:20 a.m. | Linshan Hou, Zhongyun Hua, Yuhong Li, Leo Yu Zhang

cs.CR updates on arXiv.org

Recent studies show that deep neural networks (DNNs) are vulnerable to
backdoor attacks. A backdoored DNN model behaves normally on clean inputs,
but outputs the attacker's expected behavior when an input contains a
pre-defined pattern called a trigger. However, in some tasks, the attacker
cannot know the exact target class that produces the expected behavior, because
the task may contain a large number of classes and the attacker does not have
full access to the semantic details of these classes. Thus, …
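The trigger mechanism described above can be sketched with a minimal, hypothetical example: a pre-defined patch is stamped onto an otherwise clean input so that a backdoored model would recognize it at inference time. The function name `stamp_trigger`, the image representation (a 2-D list of pixel values), and the patch placement are all illustrative assumptions, not details from the paper.

```python
def stamp_trigger(image, trigger, x=0, y=0):
    """Return a copy of the 2-D pixel grid `image` with the
    `trigger` patch pasted at column `x`, row `y`.

    This mimics how a backdoor attacker overlays a pre-defined
    pattern on an input; the model behaves normally without it.
    (Hypothetical sketch, not the paper's implementation.)
    """
    poisoned = [row[:] for row in image]  # deep-copy rows, keep clean input intact
    for dy, trigger_row in enumerate(trigger):
        for dx, value in enumerate(trigger_row):
            poisoned[y + dy][x + dx] = value
    return poisoned

# A clean 8x8 all-zero "image" and a 2x2 all-one trigger in the bottom-right corner.
clean = [[0] * 8 for _ in range(8)]
trigger = [[1, 1], [1, 1]]
poisoned = stamp_trigger(clean, trigger, x=6, y=6)
```

At inference time, the attacker would submit `poisoned` rather than `clean`; only the trigger region differs, which is why the backdoored model's behavior on clean data remains unchanged.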

Tags: attack, backdoor, deep learning, paradigm
