DynaMarks: Defending Against Deep Learning Model Extraction Using Dynamic Watermarking. (arXiv:2207.13321v1 [cs.CR])
July 28, 2022, 1:20 a.m. | Abhishek Chakraborty, Daniel Xing, Yuntao Liu, Ankur Srivastava
cs.CR updates on arXiv.org arxiv.org
The functionality of a deep learning (DL) model can be stolen via model extraction, where an attacker obtains a surrogate model by using the responses from the original model's prediction API. In this work, we propose a novel watermarking technique called DynaMarks to protect the intellectual property (IP) of DL models against such model extraction attacks in a black-box setting. Unlike existing approaches, DynaMarks does not alter the training process of the original model but rather embeds watermark …
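The abstract is truncated, but the stated idea is watermarking at the API response side rather than during training. As a toy illustration of that general approach (not the actual DynaMarks algorithm, whose details are in the paper), the sketch below perturbs a model's output probabilities at inference time with a keyed noise pattern, so that responses carry a detectable signature while the top-1 prediction is preserved; the function name `watermarked_response` and the key value are hypothetical.

```python
import numpy as np

def watermarked_response(probs, key=0xD1A):
    """Toy sketch of output-side watermarking: add small keyed noise to
    a probability vector returned by a prediction API, renormalize, and
    keep the top-1 class unchanged. Illustrative only, not DynaMarks."""
    rng = np.random.default_rng(key)
    noise = rng.uniform(-0.01, 0.01, size=probs.shape)
    out = np.clip(probs + noise, 1e-6, None)
    out /= out.sum()
    if out.argmax() != probs.argmax():
        # never flip the prediction; fall back to the clean response
        return probs
    return out

probs = np.array([0.7, 0.2, 0.1])
wm = watermarked_response(probs)
```

A surrogate model trained on such perturbed responses would tend to inherit the keyed signature, which the owner could later test for; a real scheme must also keep the perturbation small enough not to degrade API utility.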
More from arxiv.org / cs.CR updates on arXiv.org
One-shot Empirical Privacy Estimation for Federated Learning (arxiv.org)
Transferability Ranking of Adversarial Examples (arxiv.org)
A survey on hardware-based malware detection approaches (arxiv.org)
Explainable Ponzi Schemes Detection on Ethereum (arxiv.org)
Jobs in InfoSec / Cybersecurity
SOC 2 Manager, Audit and Certification
@ Deloitte | US and CA Multiple Locations
Information Security Engineers
@ D. E. Shaw Research | New York City
Security Officer Level 1 (L1)
@ NTT DATA | Virginia, United States of America
Work-Study Program - VOC Analyst - Cybersecurity - Île-de-France
@ Sopra Steria | Courbevoie, France
Senior Security Researcher, SIEM
@ Huntress | Remote US or Remote CAN
Cyber Security Engineer Lead
@ ASSYSTEM | Bridgwater, United Kingdom