Reliable Model Watermarking: Defending Against Theft without Compromising on Evasion
April 23, 2024, 4:10 a.m. | Hongyu Zhu, Sichu Liang, Wentao Hu, Fangqi Li, Ju Jia, Shilin Wang
cs.CR updates on arXiv.org arxiv.org
Abstract: With the rise of Machine Learning as a Service (MLaaS) platforms, safeguarding the intellectual property of deep learning models is becoming paramount. Among various protective measures, trigger set watermarking has emerged as a flexible and effective strategy for preventing unauthorized model distribution. However, this paper identifies an inherent flaw in the current paradigm of trigger set watermarking: evasion adversaries can readily exploit the shortcuts created by models memorizing watermark samples that deviate from the main task …
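For readers unfamiliar with the paradigm the abstract critiques, trigger set watermarking can be sketched generically: the owner trains the model on a small set of out-of-distribution "trigger" samples with assigned labels, and later claims ownership if a suspect model reproduces those labels well above chance. The following is a minimal illustrative sketch of that general idea, not the paper's method; all function names, the trigger construction, and the verification threshold are assumptions.

```python
# Generic sketch of trigger-set watermarking (illustrative; names,
# trigger construction, and threshold are assumptions, not from the paper).
import random

def embed_trigger_set(train_fn, clean_data, n_triggers=10, seed=0):
    """Build a trigger set of off-distribution samples with assigned labels,
    then train the model on clean data plus the trigger set."""
    rng = random.Random(seed)
    # Hypothetical triggers: random feature vectors paired with random labels.
    triggers = [([rng.random() for _ in range(4)], rng.randrange(2))
                for _ in range(n_triggers)]
    model = train_fn(clean_data + triggers)
    return model, triggers

def verify_ownership(predict_fn, triggers, threshold=0.8):
    """Claim ownership if the suspect model reproduces the trigger labels
    at a rate well above chance (threshold chosen arbitrarily here)."""
    hits = sum(1 for x, y in triggers if predict_fn(x) == y)
    return hits / len(triggers) >= threshold
```

The flaw the paper identifies lives in `embed_trigger_set`: because the triggers deviate from the main-task distribution, the model memorizes them via shortcuts that an evasion adversary can detect and filter out at query time.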