Nov. 23, 2023, 2:19 a.m. | Yixin Liu, Kaidi Xu, Xun Chen, Lichao Sun

cs.CR updates on arXiv.org arxiv.org

The open sourcing of large amounts of image data has promoted the development of deep learning techniques. Along with this, however, comes the privacy risk that these open-source image datasets are exploited by unauthorized third parties to train deep learning models for commercial or even illegal purposes. To prevent such abuse of public data, a poisoning-based technique, the unlearnable example, has been proposed: it significantly degrades the generalization performance of models by adding imperceptible noise to the data. To further enhance …
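The core idea described above is to perturb each image with a small additive noise, bounded in the L-infinity norm so it stays imperceptible. The sketch below illustrates only that additive, budget-constrained step with a hypothetical class-wise noise tensor; in the actual unlearnable-example method, the noise itself is found via an error-minimizing optimization, which is not shown here. All function and variable names are illustrative, not from the paper.

```python
import numpy as np

def apply_unlearnable_noise(images, labels, class_noise, eps=8 / 255):
    """Add class-wise noise, clipped to an L-infinity budget eps,
    to a batch of images with pixel values in [0, 1].

    images:      (N, H, W, C) float array
    labels:      (N,) int array of class indices
    class_noise: (num_classes, H, W, C) raw perturbations
    """
    # Project each class's perturbation into the eps-ball
    # (this enforces imperceptibility).
    delta = np.clip(class_noise, -eps, eps)
    # Each sample receives the perturbation of its class.
    poisoned = images + delta[labels]
    # Keep the result in the valid pixel range.
    return np.clip(poisoned, 0.0, 1.0)

# Toy usage: 4 single-channel 2x2 "images" across two classes.
rng = np.random.default_rng(0)
images = rng.random((4, 2, 2, 1))
labels = np.array([0, 1, 0, 1])
raw_noise = rng.normal(0.0, 1.0, size=(2, 2, 2, 1))  # pre-projection
poisoned = apply_unlearnable_noise(images, labels, raw_noise)
max_shift = np.abs(poisoned - images).max()  # bounded by eps
```

Because the perturbation is clipped before being added and the result is re-clipped to [0, 1], no pixel moves by more than eps, which is what keeps the poisoned dataset visually indistinguishable from the clean one.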

