March 20, 2024, 4:11 a.m. | Cheng-Long Wang, Qi Li, Zihang Xiang, Di Wang

cs.CR updates on arXiv.org

arXiv:2403.12830v1 Announce Type: cross
Abstract: The growing concerns surrounding data privacy and security have underscored the critical necessity for machine unlearning--aimed at fully removing data lineage from machine learning models. MLaaS providers expect this to be their ultimate safeguard for regulatory compliance. Despite its critical importance, the pace at which privacy communities have been developing and implementing strong methods to verify the effectiveness of machine unlearning has been disappointingly slow, with this vital area often receiving insufficient focus. This paper …

