June 17, 2024, 4:18 a.m. | Hanna Foerster, Robert Mullins, Ilia Shumailov, Jamie Hayes

cs.CR updates on arXiv.org

arXiv:2406.10011v1 Announce Type: cross
Abstract: Deep neural networks, costly to train and rich in intellectual property value, are increasingly threatened by model extraction attacks that compromise their confidentiality. Previous attacks have succeeded in reverse-engineering model parameters up to a precision of float64 for models trained on random data with at most three hidden layers using cryptanalytical techniques. However, the process was identified to be very time consuming and not feasible for larger and deeper models trained on standard …

