Aug. 3, 2023, 1:10 a.m. | Xiaobei Yan, Xiaoxuan Lou, Guowen Xu, Han Qiu, Shangwei Guo, Chip Hong Chang, Tianwei Zhang

cs.CR updates on arXiv.org

DNN accelerators have been widely deployed in many scenarios to speed up the
inference process and reduce energy consumption. One major concern about the
use of these accelerators is the confidentiality of the deployed models: model
inference on the accelerators can leak side-channel information, which
enables an adversary to precisely recover the model details. Such model
extraction attacks not only compromise the intellectual property of DNN
models, but also facilitate subsequent adversarial attacks.


Although previous works have …
