Corrective Machine Unlearning
Feb. 22, 2024, 5:11 a.m. | Shashwat Goel, Ameya Prabhu, Philip Torr, Ponnurangam Kumaraguru, Amartya Sanyal
cs.CR updates on arXiv.org
Abstract: Machine Learning models increasingly face data integrity challenges due to the use of large-scale training datasets drawn from the internet. We study what model developers can do if they detect that some data was manipulated or incorrect. Such manipulated data can cause adverse effects such as vulnerability to backdoored samples, systematic biases, and, in general, reduced accuracy on certain input domains. Often, not all manipulated training samples are known, and only a small, representative subset of …
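To illustrate the setting the abstract describes, here is a minimal sketch of the simplest corrective baseline: once a subset of manipulated training samples has been identified, drop that subset and retrain from scratch. This is an illustrative toy example (a tiny perceptron on made-up data), not the method proposed in the paper; all names and data in it are assumptions.

```python
# Illustrative baseline for corrective unlearning (NOT the paper's method):
# if a developer identifies a subset of manipulated samples, the naive fix
# is to exclude that subset and retrain the model from scratch.

def train_perceptron(data, epochs=20, lr=0.1):
    """Train a tiny perceptron on (x1, x2, label) tuples; labels are +1/-1."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x1, x2, y in data:
            if y * (w[0] * x1 + w[1] * x2 + b) <= 0:  # misclassified
                w[0] += lr * y * x1
                w[1] += lr * y * x2
                b += lr * y
    return w, b

def predict(model, x1, x2):
    w, b = model
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1

# Clean data: class +1 near (1, 1), class -1 near (-1, -1).
clean = [(1.0, 1.2, 1), (0.9, 1.1, 1), (-1.0, -0.8, -1), (-1.1, -1.0, -1)]
# Manipulated samples: label flips injected by an attacker.
poisoned = [(1.1, 0.9, -1), (-0.9, -1.1, 1)]

corrupted = clean + poisoned
# Suppose the developer identifies the poisoned indices (here, the last two).
deletion_set = set(range(len(clean), len(corrupted)))

# Corrective step: retrain on the data minus the identified subset.
retained = [s for i, s in enumerate(corrupted) if i not in deletion_set]
model = train_perceptron(retained)
```

The harder question the paper studies is what to do when only a small, representative subset of the manipulated samples is known, so full filtering-and-retraining as above is not directly available.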