Nov. 5, 2023, 6:10 a.m. | Raphael Joud, Pierre-Alain Moellic, Simon Pontie, Jean-Baptiste Rigaud

cs.CR updates on arXiv.org arxiv.org

Model extraction is a growing concern for the security of AI systems. For
deep neural network models, the architecture is the most important piece of
information an adversary aims to recover. Because they consist of repeated
computation blocks, neural network models deployed on edge devices generate
distinctive side-channel leakages, which can be exploited to extract critical
information when the targeted platform is physically accessible. By combining
theoretical knowledge about deep learning practices with an analysis of a
widespread implementation library (ARM CMSIS-NN), our …
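
As a concrete illustration of why a deployed model leaks its architecture, the sketch below (not taken from the paper) shows a small quantized multilayer perceptron run with CMSIS-NN's legacy q7 kernels; the layer dimensions, weight arrays and quantization shift values are placeholder assumptions. The point is the structure: inference is a fixed sequence of per-layer library calls, so a single power or electromagnetic trace of one inference splits into per-layer segments whose number, duration and shape reflect the architecture.

/*
 * Minimal sketch, assuming the legacy q7 API of ARM CMSIS-NN.
 * Dimensions, weights and shift values are illustrative placeholders.
 */
#include "arm_nnfunctions.h"   /* CMSIS-NN kernels: q7_t, q15_t, arm_*_q7 */

#define IN_DIM   64            /* placeholder input size        */
#define H1_DIM  128            /* placeholder hidden layer 1    */
#define H2_DIM   64            /* placeholder hidden layer 2    */
#define OUT_DIM  10            /* placeholder number of classes */

/* Quantized parameters, normally produced offline; left zero-filled here. */
static const q7_t w1[H1_DIM * IN_DIM], b1[H1_DIM];
static const q7_t w2[H2_DIM * H1_DIM], b2[H2_DIM];
static const q7_t w3[OUT_DIM * H2_DIM], b3[OUT_DIM];

static q7_t  buf1[H1_DIM], buf2[H2_DIM];
static q15_t vec_buf[H1_DIM];  /* scratch buffer required by the FC kernels */

void run_inference(const q7_t *input, q7_t *output)
{
    /* Layer 1: fully connected + ReLU -> first leakage segment */
    arm_fully_connected_q7(input, w1, IN_DIM, H1_DIM, 1, 7, b1, buf1, vec_buf);
    arm_relu_q7(buf1, H1_DIM);

    /* Layer 2: fully connected + ReLU -> second, shorter segment */
    arm_fully_connected_q7(buf1, w2, H1_DIM, H2_DIM, 1, 7, b2, buf2, vec_buf);
    arm_relu_q7(buf2, H2_DIM);

    /* Output layer -> final segment; segment boundaries and durations
       leak the number of layers and their relative sizes. */
    arm_fully_connected_q7(buf2, w3, H2_DIM, OUT_DIM, 1, 7, b3, output, vec_buf);
}

With physical access, an adversary can align such per-layer segments across traces and map them back to layer types and sizes, which is the kind of architectural information the abstract describes as recoverable.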
