April 6, 2022, 1:20 a.m. | Qi Zhong, Leo Yu Zhang, Shengshan Hu, Longxiang Gao, Jun Zhang, Yong Xiang

cs.CR updates on arXiv.org

Fine-tuning attacks are effective at removing embedded watermarks from deep
learning models. However, when the source data is unavailable, it is
challenging to erase the watermark without jeopardizing model performance. In
this context, we introduce Attention Distraction (AD), a novel
source-data-free watermark removal attack that makes the model selectively
forget the embedded watermarks by customizing continual learning. In
particular, AD first anchors the model's attention on the main task using some
unlabeled data. Then, through continual …
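The abstract outlines a two-stage procedure: anchor the model on its main task using unlabeled data, then use continual learning to distract it from the watermark. Below is a minimal, hypothetical PyTorch sketch of how such a loop might look. The specific losses are assumptions, not the paper's formulation: self-distillation against a frozen copy of the watermarked model stands in for the anchoring step, and training on "lure" samples with fabricated labels stands in for the distraction step. The names `attention_distraction`, `unlabeled_loader`, `lure_loader`, and `anchor_weight` are illustrative only.

```python
# Hypothetical sketch in the spirit of Attention Distraction (AD).
# The anchoring loss (self-distillation on unlabeled data) and the
# distraction loss (fabricated labels on lure data) are assumptions
# based on the abstract, not the paper's exact method.
import copy
import torch
import torch.nn.functional as F

def attention_distraction(model, unlabeled_loader, lure_loader,
                          epochs=5, lr=1e-4, anchor_weight=1.0,
                          device="cpu"):
    """Fine-tune `model` to keep main-task behavior (anchoring)
    while forgetting any embedded watermark (distraction)."""
    # A frozen copy of the watermarked model serves as the anchoring teacher.
    teacher = copy.deepcopy(model).eval()
    for p in teacher.parameters():
        p.requires_grad_(False)

    opt = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for x_u, (x_lure, y_fab) in zip(unlabeled_loader, lure_loader):
            x_u = x_u.to(device)
            x_lure, y_fab = x_lure.to(device), y_fab.to(device)

            # (1) Anchor attention on the main task: match the original
            # model's soft predictions on unlabeled data.
            with torch.no_grad():
                t_logits = teacher(x_u)
            anchor_loss = F.kl_div(
                F.log_softmax(model(x_u), dim=1),
                F.softmax(t_logits, dim=1),
                reduction="batchmean",
            )

            # (2) Distract attention from the watermark: learn a new
            # task on lure data carrying fabricated labels.
            distract_loss = F.cross_entropy(model(x_lure), y_fab)

            loss = anchor_weight * anchor_loss + distract_loss
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```

The key design point this sketch illustrates is that neither loader touches the original training set: the anchoring term needs only unlabeled inputs, which is what makes the attack source-data-free.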

