March 21, 2024, 4:10 a.m. | Ziyao Liu, Huanyi Ye, Chen Chen, Kwok-Yan Lam

cs.CR updates on arXiv.org arxiv.org

arXiv:2403.13682v1 Announce Type: new
Abstract: Machine Unlearning (MU) has recently gained considerable attention for its potential to improve AI safety by removing the influence of specific data from trained Machine Learning (ML) models. This process, known as knowledge removal, addresses concerns about training data, such as sensitivity, copyright restrictions, obsolescence, or low quality. The capability is also crucial for ensuring compliance with privacy requirements such as the Right To Be Forgotten (RTBF). Therefore, strategic knowledge removal mitigates the risk of harmful …
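As a rough illustration of one family of approximate unlearning methods that surveys like this one cover, the sketch below performs gradient ascent on a "forget" set to erase its influence, followed by a short repair fine-tune on retained data to recover utility. All names, shapes, and hyperparameters here are hypothetical, not taken from the paper; exact retraining, SISA-style sharding, and influence-based updates are other approaches in this space.

import torch
import torch.nn.functional as F

def unlearn_by_gradient_ascent(model, forget_loader, retain_loader,
                               lr=1e-4, ascent_steps=50, repair_steps=50):
    # Hypothetical helper: loaders yield (inputs, labels) batches.
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()

    # Phase 1: maximize loss on the data to be forgotten (gradient ascent),
    # pushing the model away from fitting the forget set.
    for _, (x, y) in zip(range(ascent_steps), forget_loader):
        opt.zero_grad()
        loss = -F.cross_entropy(model(x), y)  # negated loss -> ascent
        loss.backward()
        opt.step()

    # Phase 2: brief fine-tune on retained data to restore accuracy
    # degraded by the ascent phase.
    for _, (x, y) in zip(range(repair_steps), retain_loader):
        opt.zero_grad()
        F.cross_entropy(model(x), y).backward()
        opt.step()

    return model

if __name__ == "__main__":
    # Toy usage on random data, purely to show the calling convention.
    from torch.utils.data import DataLoader, TensorDataset
    model = torch.nn.Linear(10, 2)
    forget = DataLoader(TensorDataset(torch.randn(32, 10),
                                      torch.randint(0, 2, (32,))), batch_size=8)
    retain = DataLoader(TensorDataset(torch.randn(128, 10),
                                      torch.randint(0, 2, (128,))), batch_size=8)
    unlearn_by_gradient_ascent(model, forget, retain)

Note that heuristics like this only approximate unlearning; whether the forget set's influence is truly removed is exactly the kind of question the survey's discussion of attacks and defenses addresses.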

Tags: ai safety, arxiv, attacks, copyright, cs.ai, cs.cr, defenses, machine learning, survey, threats
