March 20, 2023, 1:10 a.m. | Yifan Yan, Xudong Pan, Mi Zhang, Min Yang

cs.CR updates on arXiv.org

Copyright protection for deep neural networks (DNNs) is an urgent need for AI corporations. To trace illegally distributed model copies, DNN watermarking has emerged as a technique for embedding and verifying secret identity messages in either a model's prediction behavior or its internals. Because it sacrifices less of the model's functionality and exploits more knowledge of the target DNN, the latter branch, called white-box DNN watermarking, is believed to be accurate, credible, and secure against most known watermark removal attacks, with emerging research efforts in both the …
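
The abstract is truncated above, so the paper's own scheme is not shown. As a concrete illustration of the general white-box idea only (not the authors' method), the following minimal PyTorch sketch embeds a secret bit string into one layer's weights through an extra regularization loss, in the spirit of Uchida et al.'s weight watermarking, and later verifies it with a secret projection matrix. The model architecture, message length, projection matrix, and the 0.1 loss weight are all illustrative assumptions.

# Minimal sketch of white-box DNN watermarking (Uchida-style), illustrative only.
# A secret bit string is embedded into a host layer's weights via a regularizer
# and extracted afterwards with a secret projection matrix (the verification key).
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
target = model[0].weight                      # host layer carrying the watermark
message = torch.randint(0, 2, (48,)).float()  # secret identity message (bits)
proj = torch.randn(48, target.numel())        # secret projection matrix (the key)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    x = torch.randn(128, 32)                  # stand-in for real training data
    y = torch.randint(0, 10, (128,))
    task_loss = F.cross_entropy(model(x), y)
    # Watermark loss: push the projected weights toward the message bits.
    wm_logits = proj @ target.flatten()
    wm_loss = F.binary_cross_entropy_with_logits(wm_logits, message)
    (task_loss + 0.1 * wm_loss).backward()
    opt.step()
    opt.zero_grad()

# Verification: extract the bits with the secret key and compare to the message.
extracted = (proj @ target.detach().flatten() > 0).float()
print("bit error rate:", (extracted != message).float().mean().item())

After training, whoever holds the secret projection matrix can re-extract the bits from a suspect model copy; a near-zero bit error rate supports the ownership claim. This is exactly the kind of weight-level embedding that watermark removal and obfuscation attacks, as studied in work like this paper, try to defeat.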
