July 28, 2022, 1:20 a.m. | Abhishek Chakraborty, Daniel Xing, Yuntao Liu, Ankur Srivastava

cs.CR updates on arXiv.org

The functionality of a deep learning (DL) model can be stolen via model
extraction, in which an attacker obtains a surrogate model by exploiting the
responses of the original model's prediction API. In this work, we propose
a novel watermarking technique called DynaMarks to protect the intellectual
property (IP) of DL models against such model extraction attacks in a black-box
setting. Unlike existing approaches, DynaMarks does not alter the training
process of the original model but rather embeds watermark …
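
The abstract is cut off before describing the embedding mechanism, but the stated constraints (no change to the training process, black-box prediction API) suggest the watermark is carried in the API's responses at inference time. Below is a minimal sketch of that general idea in Python, assuming the watermark takes the form of small, secretly keyed perturbations of the output probabilities; `base_model`, `watermarked_predict`, the `eps` budget, and the keyed RNG are illustrative names, not the paper's actual method or API.

```python
import numpy as np

# Owner's secret key (hypothetical); a real deployment would likely derive
# the perturbation from the query as well, so repeated queries are consistent.
rng_secret = np.random.default_rng(seed=1234)

def base_model(x: np.ndarray) -> np.ndarray:
    """Stand-in for the original DL model: returns softmax probabilities."""
    logits = x @ np.array([[1.0, -0.5], [0.3, 0.8]])  # toy 2-class linear model
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def watermarked_predict(x: np.ndarray, eps: float = 0.02) -> np.ndarray:
    """Prediction API that dynamically perturbs the returned probabilities.

    The perturbation is drawn from a secretly keyed RNG, so a surrogate
    trained on these responses inherits a statistical fingerprint that the
    owner can later test for. The argmax is restored if the noise flips it,
    so benign users see unchanged top-1 predictions.
    """
    probs = base_model(x)
    noise = rng_secret.uniform(-eps, eps, size=probs.shape)
    noise -= noise.mean(axis=-1, keepdims=True)      # keep rows summing to ~1
    marked = np.clip(probs + noise, 1e-6, 1.0)
    marked /= marked.sum(axis=-1, keepdims=True)     # renormalize after clipping
    flipped = marked.argmax(axis=-1) != probs.argmax(axis=-1)
    marked[flipped] = probs[flipped]                 # never change the label
    return marked

# A model-extraction attacker only ever sees the perturbed responses:
queries = np.random.default_rng(0).normal(size=(1000, 2))
responses = watermarked_predict(queries)  # would-be surrogate training data
assert np.allclose(responses.sum(axis=-1), 1.0, atol=1e-6)
```

Under these assumptions, a surrogate trained on `responses` absorbs the keyed perturbation statistics while legitimate users are unaffected, which is consistent with the abstract's claim that the original model's training is left untouched.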

Tags: deep learning, dynamic
