March 21, 2024, 4:10 a.m. | Ziyao Liu, Huanyi Ye, Chen Chen, Kwok-Yan Lam

cs.CR updates on arXiv.org

arXiv:2403.13682v1 Announce Type: new
Abstract: Recently, Machine Unlearning (MU) has gained considerable attention for its potential to improve AI safety by removing the influence of specific data from trained Machine Learning (ML) models. This process, known as knowledge removal, addresses concerns about data, such as its sensitivity, copyright restrictions, obsolescence, or low quality. This capability is also crucial for ensuring compliance with privacy requirements such as the Right To Be Forgotten (RTBF). Therefore, strategic knowledge removal mitigates the risk of harmful …
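To make the notion of "removing the influence of specific data" concrete, below is a minimal sketch of the exact-unlearning baseline: retraining from scratch on only the retained records. The paper surveys far more efficient approximate methods; this sketch just illustrates the goal. The dataset, model choice, and the forget_idx set of records to erase are all hypothetical, not taken from the paper.

# Minimal sketch of exact machine unlearning via full retraining.
# Assumed setup: a synthetic dataset and a logistic-regression model;
# forget_idx is a hypothetical set of records subject to an RTBF request.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, random_state=0)

original = LogisticRegression().fit(X, y)   # model trained on all data

forget_idx = np.arange(50)                  # records to be "forgotten"
retain_mask = np.ones(len(X), dtype=bool)
retain_mask[forget_idx] = False

# Exact unlearning: a model retrained only on the retained records
# carries no influence from the forgotten ones, at the cost of a
# full retraining pass, which approximate MU methods try to avoid.
unlearned = LogisticRegression().fit(X[retain_mask], y[retain_mask])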

