April 26, 2024, 4:11 a.m. | Huming Qiu, Junjie Sun, Mi Zhang, Xudong Pan, Min Yang

cs.CR updates on arXiv.org arxiv.org

arXiv:2312.04902v2 Announce Type: replace
Abstract: Deep neural networks (DNNs) are susceptible to backdoor attacks, where malicious functionality is embedded to allow attackers to trigger incorrect classifications. Old-school backdoor attacks use strong trigger features that can easily be learned by victim models. While such triggers are robust to input variation, this robustness also increases the likelihood of unintentional trigger activations. This leaves traces to existing defenses, which find approximate replacements for the original triggers that can activate the backdoor without being identical to the …
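The "strong trigger features" the abstract refers to are typically simple, fixed patterns stamped onto inputs at training time, paired with an attacker-chosen label. A minimal sketch of this classic (BadNets-style) poisoning step, using a hypothetical `apply_patch_trigger` helper and NumPy image arrays, might look like:

```python
import numpy as np

def apply_patch_trigger(image, patch_size=3, value=1.0):
    """Stamp a small solid patch in the bottom-right corner of the image.

    The patch is the backdoor trigger: samples carrying it are relabeled
    to the attacker's target class during training, so the victim model
    learns to associate the patch with that class.
    """
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:] = value
    return poisoned

# Poison one clean sample: add the trigger, override its label.
clean = np.zeros((28, 28), dtype=np.float32)   # stand-in for a grayscale image
target_label = 7                                # attacker-chosen class
poisoned = apply_patch_trigger(clean)
```

Because the patch is such an easily learned, high-salience feature, defenses can recover approximate replacements for it, which is the weakness this paper targets.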

