April 23, 2024, 4:10 a.m. | Hongyu Zhu, Sichu Liang, Wentao Hu, Fangqi Li, Ju Jia, Shilin Wang

cs.CR updates on arXiv.org

arXiv:2404.13518v1 Announce Type: new
Abstract: With the rise of Machine Learning as a Service (MLaaS) platforms, safeguarding the intellectual property of deep learning models is becoming paramount. Among various protective measures, trigger set watermarking has emerged as a flexible and effective strategy for preventing unauthorized model distribution. However, this paper identifies an inherent flaw in the current paradigm of trigger set watermarking: evasion adversaries can readily exploit the shortcuts created by models memorizing watermark samples that deviate from the main task …
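The abstract takes the trigger set watermarking paradigm as given, so a rough sketch of that generic idea may help: out-of-distribution trigger samples with arbitrarily chosen labels are memorized during training, and ownership is later claimed by checking the suspect model's accuracy on that trigger set. The sketch below is illustrative only and is not the authors' scheme or the attack the paper describes; the small CNN, random-noise triggers, joint training loss, and 0.9 verification threshold are all assumptions made for the example.

```python
# Minimal sketch of generic trigger set watermarking (illustrative assumptions,
# not the method proposed in arXiv:2404.13518).
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

class SmallNet(nn.Module):
    # Tiny CNN classifier standing in for the protected model.
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv = nn.Conv2d(1, 16, 3, padding=1)
        self.fc = nn.Linear(16 * 28 * 28, num_classes)

    def forward(self, x):
        x = F.relu(self.conv(x))
        return self.fc(x.flatten(1))

def make_trigger_set(n=32, num_classes=10):
    # Out-of-distribution samples (random noise) with arbitrary fixed labels;
    # these are exactly the "watermark samples that deviate from the main task".
    xs = torch.rand(n, 1, 28, 28)
    ys = torch.randint(0, num_classes, (n,))
    return xs, ys

def embed_watermark(model, task_batches, trigger_x, trigger_y, epochs=1, lr=1e-3):
    # Jointly minimize the main-task loss and the trigger-set loss so the
    # model memorizes the trigger labels alongside its primary objective.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in task_batches:
            opt.zero_grad()
            loss = F.cross_entropy(model(x), y)
            loss = loss + F.cross_entropy(model(trigger_x), trigger_y)
            loss.backward()
            opt.step()
    return model

@torch.no_grad()
def verify_watermark(model, trigger_x, trigger_y, threshold=0.9):
    # Ownership is asserted if trigger-set accuracy exceeds a preset threshold.
    acc = (model(trigger_x).argmax(1) == trigger_y).float().mean().item()
    return acc >= threshold, acc

if __name__ == "__main__":
    model = SmallNet()
    trig_x, trig_y = make_trigger_set()
    # Dummy stand-in for real task data, only to keep the sketch self-contained.
    task_batches = [(torch.rand(8, 1, 28, 28), torch.randint(0, 10, (8,)))
                    for _ in range(10)]
    embed_watermark(model, task_batches, trig_x, trig_y, epochs=5)
    ok, acc = verify_watermark(model, trig_x, trig_y)
    print(f"watermark verified: {ok}, trigger accuracy: {acc:.2f}")
```

The memorization step in embed_watermark is the point the paper targets: because the trigger samples lie off the task distribution, the model learns a shortcut for them that an evasion adversary can exploit.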
