July 4, 2024, 11:02 a.m. | Trishita Tiwari, Suchin Gururangan, Chuan Guo, Weizhe Hua, Sanjay Kariyappa, Udit Gupta, Wenjie Xiong, Kiwan Maeng, Hsien-Hsin S. Lee, G. Edward Suh

cs.CR updates on arXiv.org

arXiv:2306.03235v2 Announce Type: replace-cross
Abstract: In today's machine learning (ML) models, any part of the training data can affect the model output. This lack of control over the information flow from training data to model output is a major obstacle to training models on sensitive data when access control only allows individual users to access a subset of the data. To enable secure machine learning for access-controlled data, we propose the notion of information flow control for machine learning, and …
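The truncated abstract does not describe the paper's concrete mechanism, but the core idea, restricting which training data can influence a given user's output, can be illustrated with a minimal sketch. The following Python example assumes a modular setup in which each component is trained on a single access-controlled domain and inference only consults components the caller is authorized for; the class names, gating policy, and averaging rule are illustrative assumptions, not the authors' design.

```python
# Hypothetical sketch of information-flow-controlled inference.
# Assumption: one module per access-controlled data domain; a caller's output
# may only depend on modules whose training data the caller can access.

from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class DomainModule:
    """A model component trained only on one access-controlled data domain."""
    domain: str
    predict: Callable[[str], float]  # returns a score for the input


@dataclass
class AccessControlledModel:
    """Routes inference through only the modules the caller may access."""
    modules: Dict[str, DomainModule] = field(default_factory=dict)

    def add_module(self, module: DomainModule) -> None:
        self.modules[module.domain] = module

    def predict(self, x: str, allowed_domains: List[str]) -> float:
        # Information flow control: select only permitted modules, so no
        # signal from unauthorized training data reaches this caller's output.
        permitted = [m for d, m in self.modules.items() if d in allowed_domains]
        if not permitted:
            raise PermissionError("caller has access to no trained module")
        return sum(m.predict(x) for m in permitted) / len(permitted)


# Usage: two domains, a caller cleared only for the "public" domain.
model = AccessControlledModel()
model.add_module(DomainModule("public", predict=lambda x: 0.2))
model.add_module(DomainModule("confidential", predict=lambda x: 0.9))
print(model.predict("some input", allowed_domains=["public"]))  # uses only the public module
```

The per-domain modules and the simple score averaging stand in for whatever aggregation the paper actually uses; the point of the sketch is only that the access check happens before any unauthorized component contributes to the output.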

