May 24, 2024, 4:12 a.m. | Patrik Velčický, Jakub Breier, Xiaolu Hou, Mladen Kovačević

cs.CR updates on arXiv.org arxiv.org

arXiv:2405.13891v1 Announce Type: new
Abstract: Fault injection attacks are a potent threat against embedded implementations of neural network models. Several attack vectors have been proposed, such as misclassification, model extraction, and trojan/backdoor planting. Most of these attacks work by flipping bits in the memory where quantized model parameters are stored.
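To make the threat concrete, here is a minimal sketch (not taken from the paper) of the fault model the abstract describes: a single bit flip in the stored two's-complement representation of an int8-quantized weight. Flipping the most significant bit turns a small positive weight into a large negative one, which is why such faults can be so damaging. The array contents and the `flip_bit` helper are illustrative assumptions.

```python
import numpy as np

def flip_bit(weights: np.ndarray, index: int, bit: int) -> np.ndarray:
    """Return a copy of an int8 weight array with one bit flipped (fault model)."""
    faulted = weights.copy()
    raw = faulted.view(np.uint8)       # reinterpret the bytes, no value conversion
    raw[index] ^= np.uint8(1 << bit)   # flip the chosen bit in place
    return faulted

weights = np.array([23, -4, 87], dtype=np.int8)
faulted = flip_bit(weights, index=0, bit=7)   # flip the MSB of the first weight
print(weights[0], "->", faulted[0])           # 23 -> -105
```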
In this paper, we introduce an encoding-based protection method against bit-flip attacks on neural networks, titled DeepNcode. We experimentally evaluate our proposal with several publicly available models and datasets, by …
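The abstract does not spell out the DeepNcode construction, so the following is only a generic illustration of the encoding-based idea, not the authors' scheme: each quantized weight is stored together with a redundancy word (here, its bitwise complement), and the pair is verified before the weights are used, so a bit flip confined to either copy is detected.

```python
import numpy as np

def encode(weights: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Store int8 weights alongside a redundancy word (bitwise complement)."""
    raw = weights.view(np.uint8)
    return weights.copy(), (~raw).copy()

def check(weights: np.ndarray, redundancy: np.ndarray) -> bool:
    """Return True iff every stored byte still matches its redundancy word."""
    return bool(np.all((weights.view(np.uint8) ^ redundancy) == 0xFF))

weights = np.array([23, -4, 87], dtype=np.int8)
stored, redundancy = encode(weights)

stored.view(np.uint8)[0] ^= 0x80                     # simulate a fault: flip one bit
print("integrity ok:", check(stored, redundancy))    # integrity ok: False
```

Real code-based protections trade memory overhead for detection guarantees; this two-copy check doubles storage and only detects flips that do not hit matching bit positions in both copies.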
