Oct. 4, 2023, 1:21 a.m. | Rui Min, Zeyu Qin, Li Shen, Minhao Cheng

cs.CR updates on arXiv.org

It has been widely observed that deep neural networks (DNNs) are vulnerable to
backdoor attacks, in which attackers can maliciously manipulate model behavior
by tampering with a small set of training samples. Although a line of defense
methods has been proposed to mitigate this threat, they either require
complicated modifications to the training process or rely heavily on a
specific model architecture, making them hard to deploy in real-world
applications. Therefore, in this paper, we instead start with fine-tuning, one …
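For context on the approach the abstract turns to: the sketch below shows the generic fine-tuning baseline for backdoor mitigation, in which a possibly backdoored model is retrained for a few epochs on a small trusted clean set. This is a minimal PyTorch illustration of that baseline under stated assumptions, not the paper's method; `model` and `clean_loader` are hypothetical placeholders for the defender's network and clean data.

```python
import torch
import torch.nn as nn

def fine_tune_purify(model: nn.Module, clean_loader, epochs: int = 10, lr: float = 1e-2):
    """Vanilla fine-tuning baseline for backdoor mitigation.

    Retrains a possibly backdoored model on a small, trusted,
    trigger-free dataset. This is only the generic starting point
    the abstract refers to, not the paper's proposed technique.
    """
    device = next(model.parameters()).device
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    model.train()
    for _ in range(epochs):
        for inputs, labels in clean_loader:  # clean_loader yields (inputs, labels) from trusted data
            inputs, labels = inputs.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(inputs), labels)
            loss.backward()
            optimizer.step()
    return model
```

A few epochs of standard training on clean samples can weaken the learned trigger-to-target mapping while preserving clean accuracy, which is why fine-tuning is an attractive, architecture-agnostic starting point compared with defenses that modify the training pipeline.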

