June 17, 2024, 4:18 a.m. | Hanna Foerster, Robert Mullins, Ilia Shumailov, Jamie Hayes

cs.CR updates on arXiv.org arxiv.org

arXiv:2406.10011v1 Announce Type: cross
Abstract: Deep neural networks, costly to train and rich in intellectual property value, are increasingly threatened by model extraction attacks that compromise their confidentiality. Previous attacks have succeeded in reverse-engineering model parameters up to a precision of float64 for models trained on random data with at most three hidden layers using cryptanalytical techniques. However, the process was identified to be very time consuming and not feasible for larger and deeper models trained on standard benchmarks. Our …

