Nov. 23, 2023, 2:19 a.m. | Yixin Liu, Kaidi Xu, Xun Chen, Lichao Sun

cs.CR updates on arXiv.org arxiv.org

The open sourcing of large amounts of image data has driven the development of
deep learning techniques. Along with this comes the privacy risk that these
open-source image datasets will be exploited by unauthorized third parties to
train deep learning models for commercial or illegal purposes. To prevent the
abuse of public data, a poisoning-based technique, the unlearnable example, has
been proposed: it significantly degrades the generalization performance of
models trained on the data by adding imperceptible noise to it. To further enhance …
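To illustrate the idea behind unlearnable examples, the sketch below crafts *error-minimizing* noise for a toy logistic-regression model: the perturbation is optimized to drive the training loss toward zero, so the poisoned samples look "already learned" and contribute little useful gradient signal. This is a minimal NumPy illustration of the general technique, not the paper's actual method; the model, step sizes, and the `error_minimizing_noise` helper are all assumptions chosen for clarity.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(X, y, w):
    """Binary cross-entropy of a logistic model w on inputs X, labels y."""
    p = sigmoid(X @ w)
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

def error_minimizing_noise(X, y, w, eps=0.5, steps=20, lr=0.5):
    """Craft per-sample noise that *minimizes* the model's training loss.

    Hypothetical sketch: the noise is found by gradient descent on the
    loss w.r.t. the input perturbation, clipped to an L-infinity ball of
    radius eps so it stays (notionally) imperceptible.
    """
    delta = np.zeros_like(X)
    for _ in range(steps):
        p = sigmoid((X + delta) @ w)
        grad = np.outer(p - y, w)          # d(loss)/d(delta), per sample
        delta -= lr * grad                 # descend: make loss smaller
        delta = np.clip(delta, -eps, eps)  # keep the noise bounded
    return delta
```

After perturbation, the training loss on the poisoned data is lower than on the clean data, which is the mechanism by which the real technique starves a model of learning signal.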

