April 10, 2023, 1:10 a.m. | Jonah O'Brien Weiss, Tiago Alves, Sandip Kundu

cs.CR updates on arXiv.org (arxiv.org)

Deep Neural Networks (DNNs) have become ubiquitous due to their performance
on prediction and classification problems. However, they face a variety of
threats as their usage spreads. Model extraction attacks, which steal DNNs,
endanger intellectual property, data privacy, and security. Previous research
has shown that system-level side-channels can be used to leak the architecture
of a victim DNN, exacerbating these risks. We propose two DNN architecture
extraction techniques catering to various threat models. The first technique
uses a malicious, dynamically …
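The abstract is cut off before it finishes describing the techniques, so the following is only an illustrative sketch of the general class of attack it references: inferring a victim DNN's architecture from execution-time profiles. The candidate model set, the input shape, and the nearest-fingerprint matching below are assumptions for illustration, not the authors' method.

# Minimal sketch (an assumption, not the paper's technique): distinguishing
# candidate DNN architectures by their forward-pass timing fingerprints.
import time
import torch
import torchvision.models as models

def time_forward(model, x, warmup=3, runs=10):
    """Median wall-clock time of a forward pass, after warm-up runs."""
    model.eval()
    with torch.no_grad():
        for _ in range(warmup):
            model(x)
        times = []
        for _ in range(runs):
            t0 = time.perf_counter()
            model(x)
            times.append(time.perf_counter() - t0)
    return sorted(times)[len(times) // 2]

# Candidate architectures the attacker suspects the victim may be running
# (a hypothetical candidate set, chosen here only for illustration).
candidates = {
    "resnet18": models.resnet18(),
    "resnet50": models.resnet50(),
    "vgg16": models.vgg16(),
}

x = torch.randn(1, 3, 224, 224)  # dummy input with a typical image shape

# Build a timing fingerprint for each candidate.
fingerprints = {name: time_forward(m, x) for name, m in candidates.items()}

# Stand-in for a latency observed from the victim via a side-channel.
observed = time_forward(models.resnet50(), x)

# Guess the architecture whose fingerprint is closest to the observation.
guess = min(fingerprints, key=lambda n: abs(fingerprints[n] - observed))
print(f"observed {observed * 1e3:.1f} ms -> best-matching candidate: {guess}")

Real attacks in this vein use far finer-grained signals (per-layer GPU kernel traces, memory-access patterns) than end-to-end latency, but the matching-against-candidates structure is the same.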

Tags: architecture, attacks, classification, data privacy, GPU, intellectual property, leak, malicious, neural networks, performance, prediction, privacy, profiles, PyTorch, research, risks, security, side channels, steal, system, techniques, threat models, victim
