July 2, 2024, 4:14 a.m. | Linshan Hou, Zhongyun Hua, Yuhong Li, Yifeng Zheng, Leo Yu Zhang

cs.CR updates on arXiv.org arxiv.org

arXiv:2211.01875v2 Announce Type: replace
Abstract: Deep neural networks (DNNs) are vulnerable to backdoor attacks, where a backdoored model behaves normally on clean inputs but exhibits attacker-specified behaviors on inputs containing triggers. Most previous backdoor attacks focus on either the all-to-one or the all-to-all paradigm, allowing attackers to manipulate an input to attack a single target class. Moreover, both paradigms rely on a single trigger for backdoor activation, rendering attacks ineffective if the trigger is destroyed. In …
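The all-to-one behavior the abstract describes can be sketched with a toy stand-in: a "backdoored" classifier that acts like the clean model until a trigger patch appears in the input, at which point every input maps to one attacker-chosen class. This is a minimal illustration only; the names (`clean_predict`, `stamp_trigger`, the trigger pattern, and the target class) are hypothetical and not from the paper.

```python
# Toy sketch of the all-to-one backdoor paradigm: normal behavior on
# clean inputs, a fixed target class whenever the trigger is present.
# All names and values here are illustrative, not from the paper.

TRIGGER = [255, 255, 255, 255]   # hypothetical trigger: a small bright patch
TARGET_CLASS = 7                 # single attacker-chosen class (all-to-one)

def clean_predict(image):
    """Stand-in for the benign model's behavior (toy 10-class output)."""
    return sum(image) % 10

def backdoored_predict(image):
    """Matches clean_predict on clean inputs; maps triggered inputs to
    the attacker's target class."""
    if image[:4] == TRIGGER:     # trigger check: patch region matches
        return TARGET_CLASS
    return clean_predict(image)

def stamp_trigger(image):
    """Attacker stamps the trigger by overwriting the patch region."""
    return TRIGGER + image[4:]

clean = [10, 20, 30, 40, 50, 60, 70, 80]
poisoned = stamp_trigger(clean)
```

Note how this sketch also makes the abstract's fragility point concrete: if the patch region is altered (the trigger "destroyed"), `backdoored_predict` falls back to clean behavior and the attack fails.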
