Nov. 28, 2022, 2:10 a.m. | Seonhye Park, Alsharif Abuadbba, Shuo Wang, Kristen Moore, Yansong Gao, Hyoungshick Kim, Surya Nepal

cs.CR updates on arXiv.org

Training highly performant deep neural networks (DNNs) typically requires
collecting a massive dataset and using powerful computing resources.
Unauthorized redistribution of private pre-trained DNNs can therefore cause
severe economic loss for model owners. To protect the ownership of DNN
models, DNN watermarking schemes have been proposed that embed secret
information in a DNN model and verify its presence to establish ownership.
However, existing DNN watermarking schemes compromise the model's utility and are
vulnerable to watermark removal …
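The abstract refers to schemes that embed secret information in a model and later verify its presence. As a minimal, hypothetical sketch of one such white-box approach (weight-based watermarking with a secret projection key, not necessarily this paper's method): a secret random matrix maps the flattened model weights to a bit string, and a small logistic-loss penalty nudges the weights so the projection's signs encode the owner's bits. All names and parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Model weights": stand-in for a DNN layer's flattened parameters.
w = rng.normal(0, 1, 64)

# Secret key (random projection) and the owner's watermark bits.
A = rng.normal(0, 1, (16, 64))
bits = rng.integers(0, 2, 16)

def extract(w, A):
    """Read the watermark: the sign of each secret projection of the weights."""
    return (A @ w > 0).astype(int)

def embed(w, A, bits, lr=0.05, steps=5000):
    """Gradient steps on a logistic-loss penalty that pushes sigmoid(A @ w)
    toward the target bits; in a real scheme this term would be added to
    the task loss so utility is preserved."""
    w = w.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(A @ w)))
        w -= lr * (A.T @ (p - bits)) / len(bits)
    return w

w_marked = embed(w, A, bits)
acc = (extract(w_marked, A) == bits).mean()
print(f"watermark bit accuracy: {acc:.2f}")
```

Verification requires only the secret key `A` and the claimed bit string; a party without `A` sees ordinary-looking weights, which is what makes the embedded information "secret" in this style of scheme.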

