April 17, 2023, 1:12 a.m. | Jianhong Pan, Lin Geng Foo, Qichen Zheng, Zhipeng Fan, Hossein Rahmani, Qiuhong Ke, Jun Liu

cs.CR updates on arXiv.org arxiv.org

Dynamic neural networks can greatly reduce computation redundancy without
compromising accuracy by adapting their structures based on the input. In this
paper, we explore the robustness of dynamic neural networks against
energy-oriented attacks aimed at reducing their efficiency. Specifically, we
attack dynamic models with our novel algorithm GradMDM, a technique that
adjusts the direction and magnitude of the gradients to effectively find a
small perturbation for each input that will activate more computational
units of dynamic models …
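The abstract only sketches GradMDM at a high level; as an illustration of the general idea behind an energy-oriented attack, the hypothetical PGD-style sketch below perturbs the input so that more gating units of a dynamic model switch on. The model interface, the gate outputs, and all hyperparameters are assumptions for illustration, not details taken from the paper, and the sketch does not reproduce GradMDM's specific reweighting of gradient direction and magnitude.

import torch

def energy_attack(model, x, epsilon=8 / 255, alpha=1 / 255, steps=20):
    # Hypothetical sketch, not the paper's GradMDM: a PGD-style loop that
    # maximizes a proxy "energy" loss so that more gates of a dynamic model
    # activate. Assumes `model(x)` returns (logits, list_of_gate_logits).
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        _, gates = model(x + delta)
        # Push every gating unit toward "active" to increase executed compute.
        loss = sum(torch.sigmoid(g).sum() for g in gates)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # ascend the energy objective
            delta.clamp_(-epsilon, epsilon)      # keep the perturbation small
        delta.grad.zero_()
    return (x + delta).detach()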

