Jan. 25, 2024, 2:10 a.m. | Zhengyao Song, Yongqiang Li, Danni Yuan, Li Liu, Shaokui Wei, Baoyuan Wu

cs.CR updates on arXiv.org arxiv.org

This work explores an emerging security threat against deep neural network (DNN)-based image classification: the backdoor attack. In this scenario, the attacker injects a backdoor into the model by manipulating the training data, such that the backdoor is activated by a particular trigger and steers the model toward a target prediction at inference. Currently, most existing data-poisoning-based attacks struggle to succeed at low poisoning ratios, increasing the risk of being caught by defense methods. …
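The poisoning scenario described above can be sketched in a few lines. This is a minimal, illustrative BadNets-style patch-trigger example, not the attack proposed in the paper; all function names, the patch shape, and the poisoning ratio are assumptions chosen for clarity:

```python
import numpy as np

def poison_dataset(images, labels, target_label, poison_ratio=0.05,
                   patch_value=1.0, patch_size=3, seed=0):
    """Stamp a small bright patch (the trigger) onto a random fraction of
    the training images and relabel them to the attacker's target class.

    images : (N, H, W) float array
    labels : (N,) int array
    Returns poisoned copies plus the indices that were poisoned.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = max(1, int(len(images) * poison_ratio))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -patch_size:, -patch_size:] = patch_value  # stamp the trigger
    labels[idx] = target_label                             # flip to target class
    return images, labels, idx

def apply_trigger(image, patch_value=1.0, patch_size=3):
    """At inference, stamping the same patch is what activates the backdoor."""
    image = image.copy()
    image[-patch_size:, -patch_size:] = patch_value
    return image
```

A model trained on the poisoned set behaves normally on clean inputs but predicts `target_label` whenever the patch is present; the tension the abstract highlights is that detectability grows as `poison_ratio` grows, while attack success typically drops at the low ratios needed to evade defenses.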

