May 19, 2023, 1:10 a.m. | Bin Fang, Bo Li, Shuang Wu, Ran Yi, Shouhong Ding, Lizhuang Ma

cs.CR updates on arXiv.org arxiv.org

The unauthorized use of personal data for commercial purposes and the
clandestine acquisition of private data for training machine learning models
continue to raise concerns. In response to these issues, researchers have
proposed availability attacks that aim to render data unexploitable. However,
many current attack methods are rendered ineffective by adversarial training.
In this paper, we re-examine the concept of unlearnable examples and find
that the existing robust error-minimizing noise pursues an inaccurate
optimization objective. Building on these observations, we …
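To make the idea concrete, here is a minimal toy sketch of error-minimizing noise, the mechanism behind unlearnable examples that the abstract refers to. This is an illustrative assumption, not the paper's method: a logistic-regression surrogate stands in for a neural network, and both the model parameters and the perturbation descend the training loss, so the poisoned data becomes "too easy" to learn and carries little usable signal.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, eps, lr = 200, 10, 0.5, 0.1
X = rng.normal(size=(n, d))
y = (X[:, 0] > 0).astype(float)   # labels determined by the first feature
theta = np.zeros(d)               # surrogate model parameters
delta = np.zeros_like(X)          # per-sample error-minimizing noise

def loss_grad(theta, Xp):
    """Logistic loss on perturbed data, with gradients w.r.t. theta and inputs."""
    p = 1.0 / (1.0 + np.exp(-Xp @ theta))        # sigmoid predictions
    loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    g_theta = Xp.T @ (p - y) / n                 # gradient w.r.t. parameters
    g_x = np.outer(p - y, theta) / n             # gradient w.r.t. each input row
    return loss, g_theta, g_x

for _ in range(300):
    # inner step: the surrogate model minimizes the loss on the perturbed data
    _, g_theta, _ = loss_grad(theta, X + delta)
    theta -= lr * g_theta
    # noise step: delta ALSO minimizes the loss (hence "error-minimizing"),
    # projected back onto an L_inf ball of radius eps
    _, _, g_x = loss_grad(theta, X + delta)
    delta = np.clip(delta - lr * g_x, -eps, eps)

final_loss, _, _ = loss_grad(theta, X + delta)
print(final_loss)
```

Because both minimizations point in the same direction, the loss on the perturbed set collapses quickly; the abstract's point is that the *robust* variant of this noise (designed to survive adversarial training) has been optimizing a subtly wrong objective.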

