Feb. 22, 2023, 2:10 a.m. | Haibo Jin, Ruoxi Chen, Jinyin Chen, Yao Cheng, Chong Fu, Ting Wang, Yue Yu, Zhaoyan Ming

cs.CR updates on arXiv.org

The success of deep neural networks (DNNs) in real-world applications has
benefited from the abundance of pre-trained models. However, backdoored
pre-trained models pose a significant trojan threat to the deployment of
downstream DNNs. Existing DNN testing methods are mainly designed to find
incorrect corner-case behaviors in adversarial settings, but they fail to
discover the backdoors crafted by strong trojan attacks. Observing trojan
network behaviors shows that they are not reflected by just a single
compromised neuron, as proposed by …
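The kind of threat the abstract describes can be illustrated with a toy sketch (my own illustration, not the paper's method): a classic patch-style trojan stamps a small trigger onto an input, and the trigger's effect spreads across many hidden units of a network rather than a single neuron. The network here is just one random linear layer with ReLU, standing in for a pre-trained model.

```python
import numpy as np

def apply_trigger(image, size=3):
    """Return a copy of `image` with a bright square trigger stamped
    into the bottom-right corner (a common patch-style backdoor trigger)."""
    triggered = image.copy()
    triggered[-size:, -size:] = 1.0
    return triggered

rng = np.random.default_rng(0)
image = rng.random((28, 28))            # stand-in for a grayscale input
w = rng.standard_normal((28 * 28, 16))  # one random hidden layer (toy "DNN")

# ReLU activations on the clean input vs. the triggered input.
clean_act = np.maximum(image.reshape(-1) @ w, 0)
trojan_act = np.maximum(apply_trigger(image).reshape(-1) @ w, 0)

# Count hidden units whose activation the trigger perturbed. In line with
# the abstract's observation, the change is not confined to a single neuron.
changed = int(np.sum(~np.isclose(clean_act, trojan_act)))
print("hidden units affected by the trigger:", changed, "of", clean_act.size)
```

This only demonstrates why single-neuron inspection is a weak detection signal; the paper's actual analysis of trojan network behaviors is not reproduced here.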
