June 13, 2022, 1:20 a.m. | Huiying Li, Arjun Nitin Bhagoji, Ben Y. Zhao, Haitao Zheng

cs.CR updates on arXiv.org

Backdoors are powerful attacks against deep neural networks (DNNs). By
poisoning training data, attackers can inject hidden rules (backdoors) into
DNNs that activate only on inputs containing attack-specific triggers. While
existing work has studied backdoor attacks on a variety of DNN models, it
considers only static models, which remain unchanged after initial deployment.
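
For intuition, the classic image-classification backdoor stamps a fixed trigger patch onto a small fraction of training images and relabels them to an attacker-chosen class. The sketch below illustrates the idea under assumed conventions (images as a float array in [0, 1] with shape (N, H, W, C); the patch size, location, and poison rate are illustrative, not taken from the paper):

```python
import numpy as np

def poison_dataset(images, labels, target_label, poison_frac=0.05, seed=0):
    """Stamp a fixed trigger patch onto a fraction of training images and
    relabel them to the attacker's target class (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_frac)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i, -4:, -4:, :] = 1.0  # 4x4 white square in the bottom-right corner
        labels[i] = target_label      # hidden rule: trigger patch -> target class
    return images, labels
```

A model trained on the poisoned set behaves normally on clean inputs but predicts `target_label` whenever the trigger patch is present.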


In this paper, we study the impact of backdoor attacks in a more realistic
setting: time-varying DNN models, whose weights are updated periodically to
handle …
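
The time-varying setting can be pictured as a serving loop in which the deployed model's weights are refreshed on newly collected data at a fixed cadence, so an injected backdoor must survive successive updates. The schematic below is only a sketch; the `fine_tune` helper, the data stream, and the update cadence are hypothetical, not the paper's setup:

```python
def serve_time_varying_model(model, data_stream, fine_tune, update_every=7):
    """Schematic serving loop for a time-varying DNN: weights are
    periodically refreshed on fresh data, so a poisoned backdoor must
    persist across (or be re-injected into) successive updates."""
    for day, batch in enumerate(data_stream):
        if day > 0 and day % update_every == 0:
            model = fine_tune(model, batch)  # periodic weight update
        yield model  # model actually served during this period
```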

attacks, backdoor
