Verifying Integrity of Deep Ensemble Models by Lossless Black-box Watermarking with Sensitive Samples. (arXiv:2205.04145v2 [cs.CR] UPDATED)
May 11, 2022, 1:20 a.m. | Lina Lin, Hanzhou Wu
cs.CR updates on arXiv.org arxiv.org
With the widespread use of deep neural networks (DNNs) in many areas, a growing
number of studies focus on protecting DNN models from intellectual property (IP)
infringement. Many existing methods apply digital watermarking to protect DNN
models. Most of them either embed a watermark directly into the internal
network structure/parameters or insert a zero-bit watermark by fine-tuning the
model to be protected on a set of so-called trigger samples. Though these
methods work very well, they were designed for …
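The zero-bit, trigger-sample approach mentioned above can be sketched in a few lines: the owner queries the suspect model on secret trigger inputs and checks whether the predictions match the secret labels chosen at embedding time. This is a minimal illustrative sketch, not the paper's actual method; the model, triggers, and threshold below are all invented for the example.

```python
# Hedged sketch of black-box zero-bit watermark verification via trigger samples.
# toy_model, SECRET_TRIGGERS, and the threshold are illustrative assumptions,
# not taken from the paper being summarized.

def verify_watermark(model, triggers, threshold=1.0):
    """Query the model on trigger samples and compare its predictions
    to the secret labels fixed when the watermark was embedded."""
    matches = sum(1 for x, y in triggers if model(x) == y)
    return matches / len(triggers) >= threshold

# Secret (input, label) pairs known only to the model owner.
SECRET_TRIGGERS = [((0.1, 0.9), 7), ((0.4, 0.4), 3), ((0.8, 0.2), 7)]

def toy_model(x):
    # A toy "watermarked" model that has memorized the trigger labels.
    for tx, ty in SECRET_TRIGGERS:
        if x == tx:
            return ty
    return 0  # default class for ordinary, non-trigger inputs

print(verify_watermark(toy_model, SECRET_TRIGGERS))   # watermarked: True
print(verify_watermark(lambda x: 0, SECRET_TRIGGERS)) # unrelated model: False
```

In a real deployment the triggers would be crafted inputs (the "sensitive samples" of the title) and verification would go through the model's prediction API only, which is what makes the scheme black-box.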
Jobs in InfoSec / Cybersecurity
Information Technology Specialist II: Network Architect
@ Los Angeles County Employees Retirement Association (LACERA) | Pasadena, CA
Cybersecurity Skills Challenge -- Sponsored by DoD
@ Correlation One | United States
Security Operations Center (SOC) Analyst
@ GK Cybersecurity Group | Remote
Cyber Threat Defense - PAM Manager
@ PwC | Amsterdam - Thomas R. Malthusstraat 5
InfoSec Specialist
@ Deutsche Bank | Bucharest
DevSecOps Engineer
@ Swiss Re | Bengaluru, KA, IN