Probing the Transition to Dataset-Level Privacy in ML Models Using an Output-Specific and Data-Resolved Privacy Profile. (arXiv:2306.15790v1 [cs.LG])
cs.CR updates on arXiv.org arxiv.org
Differential privacy (DP) is the prevailing technique for protecting user
data in machine learning models. However, shortcomings of this framework
include a lack of clarity in selecting the privacy budget $\epsilon$ and a
lack of quantification of the privacy leakage incurred for a particular data
row by a particular trained model. We make progress on these limitations, and
offer a new perspective for visualizing DP results, by studying a privacy
metric that quantifies the extent to which a model trained …
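For background on the privacy budget $\epsilon$ mentioned in the abstract, a minimal sketch of the standard Laplace mechanism for $\epsilon$-DP is shown below. This is textbook DP, not the paper's output-specific privacy-profile metric; the function names (`laplace_noise`, `dp_count`) are illustrative. The key relationship is that the noise scale is sensitivity divided by $\epsilon$, so a smaller budget means more noise and stronger privacy.

```python
# Sketch of the epsilon-DP Laplace mechanism (standard DP background,
# not the paper's method). Noise scale = sensitivity / epsilon, so
# shrinking the budget epsilon injects more noise.
import math
import random


def laplace_noise(scale: float) -> float:
    """Draw one Laplace(0, scale) sample via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))


def dp_count(values, epsilon: float) -> float:
    """Release a count under epsilon-DP; counting queries have sensitivity 1."""
    sensitivity = 1.0
    return len(values) + laplace_noise(sensitivity / epsilon)
```

For example, `dp_count(rows, epsilon=0.1)` returns a much noisier count than `dp_count(rows, epsilon=10.0)`, which is exactly the trade-off that makes choosing $\epsilon$ hard in practice.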