Feb. 17, 2022, 8:20 a.m. | Zhenting Wang, Hailun Ding, Juan Zhai, Shiqing Ma

cs.CR updates on arXiv.org arxiv.org

Deep Neural Networks (DNNs) can learn Trojans (or backdoors) from benign or
poisoned data, which raises security concerns about using them. By exploiting
such Trojans, an adversary can add a fixed input-space perturbation to any
given input to mislead the model into predicting certain outputs (i.e., target
labels). In this paper, we analyze such input-space Trojans in DNNs and
propose a theory to explain the relationship between a model's decision
regions and Trojans: a complete and accurate Trojan corresponds …
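As a rough, hypothetical illustration of the threat model the abstract
describes (not the paper's own method), the sketch below stamps a fixed
input-space trigger onto an arbitrary input; the trigger pattern, mask, and
input shapes are assumptions chosen purely for the example.

import numpy as np

def apply_trigger(x, trigger, mask):
    """Stamp a fixed trigger onto input x.

    x, trigger: arrays of shape (H, W, C) with values in [0, 1]
    mask:       array of shape (H, W, 1); 1 where the trigger overwrites x
    """
    return (1 - mask) * x + mask * trigger

# Assumed example trigger: a 4x4 white patch in the top-left corner.
H, W, C = 32, 32, 3
trigger = np.zeros((H, W, C)); trigger[:4, :4, :] = 1.0
mask = np.zeros((H, W, 1));    mask[:4, :4, :] = 1.0

x = np.random.rand(H, W, C)          # stand-in for a benign input
x_poisoned = apply_trigger(x, trigger, mask)

# For a backdoored model f, f(x_poisoned) would return the attacker's
# target label regardless of x's true class -- the fixed input-space
# perturbation behavior that the paper analyzes.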
