Web: http://arxiv.org/abs/2209.06300

Sept. 15, 2022, 1:20 a.m. | William Hackett, Stefan Trawicki, Zhengxin Yu, Neeraj Suri, Peter Garraghan

cs.CR updates on arXiv.org

Deep Learning (DL) models increasingly power a diverse range of applications.
Unfortunately, this pervasiveness also makes them attractive targets for
extraction attacks, which can steal the architecture, parameters, and
hyper-parameters of a targeted DL model. Existing extraction attack studies
have observed varying levels of attack success across different DL models and
datasets, yet the underlying cause(s) of their susceptibility often remain
unclear. Ascertaining such root-cause weaknesses would help facilitate secure
DL systems, though this requires studying extraction attacks in a wide …
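To illustrate the core idea of an extraction attack, here is a minimal sketch. It deliberately simplifies the paper's DL setting down to a linear model: the victim exposes only a black-box prediction API, and the attacker recovers the secret weights and bias exactly with one query per input dimension plus one for the bias. All names (`victim_predict`, `extract_linear`, the secret parameters) are hypothetical and not from the paper.

```python
# Hypothetical "victim": a linear regressor with secret parameters,
# standing in for a deployed model behind a query-only API.
SECRET_W = [2.0, -1.5, 0.5]
SECRET_B = 0.75

def victim_predict(x):
    """Black-box query interface: the attacker sees only the output."""
    return sum(w * xi for w, xi in zip(SECRET_W, x)) + SECRET_B

def extract_linear(n_features, query):
    """Recover a linear model's parameters from black-box queries:
    b = f(0), and w_i = f(e_i) - b for each standard basis vector e_i.
    Total cost: n_features + 1 queries."""
    b = query([0.0] * n_features)
    w = []
    for i in range(n_features):
        e_i = [1.0 if j == i else 0.0 for j in range(n_features)]
        w.append(query(e_i) - b)
    return w, b

stolen_w, stolen_b = extract_linear(3, victim_predict)
print(stolen_w, stolen_b)  # recovers [2.0, -1.5, 0.5] and 0.75
```

Real DL models are nonlinear, so practical attacks instead train a surrogate on query/response pairs; the point here is only that a query interface alone can leak model parameters.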

Tags: adversarial attack, deep learning, framework
