Reliable Model Watermarking: Defending Against Theft without Compromising on Evasion
April 23, 2024, 4:10 a.m. | Hongyu Zhu, Sichu Liang, Wentao Hu, Fangqi Li, Ju Jia, Shilin Wang
cs.CR updates on arXiv.org arxiv.org
Abstract: With the rise of Machine Learning as a Service (MLaaS) platforms, safeguarding the intellectual property of deep learning models is becoming paramount. Among various protective measures, trigger set watermarking has emerged as a flexible and effective strategy for preventing unauthorized model distribution. However, this paper identifies an inherent flaw in the current paradigm of trigger set watermarking: evasion adversaries can readily exploit the shortcuts created by models memorizing watermark samples that deviate from the main task …
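For context, the trigger-set watermarking the abstract refers to works roughly like this: the model owner keeps a secret set of specially labeled samples, trains the model to memorize them, and later claims ownership if a suspect model reproduces those secret labels at a rate far above chance. A minimal illustrative sketch (function names and the 0.9 threshold are assumptions for illustration, not details from the paper):

```python
# Illustrative sketch of trigger-set ownership verification.
# The threshold and names are assumptions, not taken from the paper.

def verify_ownership(model, trigger_set, threshold=0.9):
    """Flag ownership if the suspect model reproduces the secret
    trigger labels at a rate well above chance."""
    hits = sum(1 for x, y in trigger_set if model(x) == y)
    accuracy = hits / len(trigger_set)
    return accuracy >= threshold, accuracy

# Toy example: a "stolen" model that memorized the trigger set.
trigger_set = [((i,), i % 3) for i in range(10)]  # (input, secret label) pairs
lookup = dict(trigger_set)                        # perfect memorization
owned, acc = verify_ownership(lambda x: lookup.get(x), trigger_set)
```

The flaw the paper highlights is that because these trigger samples deviate from the main task distribution, the memorization creates "shortcuts" that an evasion adversary can detect and exploit.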