Exploring Machine Learning Privacy/Utility Trade-Off from a Hyperparameters Lens. (arXiv:2303.01819v1 [cs.LG])
cs.CR updates on arXiv.org
Machine Learning (ML) architectures have been applied to several applications
that involve sensitive data, where a guarantee of users' data privacy is
required. Differentially Private Stochastic Gradient Descent (DPSGD) is the
state-of-the-art method for training privacy-preserving models. However, DPSGD
comes at a considerable accuracy loss, leading to sub-optimal privacy/utility
trade-offs. Toward investigating new ground for a better privacy-utility
trade-off, this work asks: (i) whether models' hyperparameters have any
inherent impact on ML models' privacy-preserving properties, and (ii) whether
models' hyperparameters have …
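For context on the DPSGD method the abstract refers to, the core update is: clip each example's gradient to a fixed L2 norm, average the clipped gradients, and add Gaussian noise calibrated to that clipping bound. The sketch below illustrates one such step with NumPy; the function name, hyperparameter values, and toy gradients are illustrative and not taken from the paper.

```python
import numpy as np

def dpsgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
               noise_multiplier=1.0, rng=np.random.default_rng(0)):
    """One DP-SGD update: per-example clipping, averaging, Gaussian noise."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds the clipping bound.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    batch_size = len(clipped)
    # Noise std is noise_multiplier * clip_norm: the clipping bound is the
    # sensitivity of the summed gradient to any single example.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=params.shape)
    noisy_mean = (np.sum(clipped, axis=0) + noise) / batch_size
    return params - lr * noisy_mean

# Toy usage: two per-example gradients for a 3-parameter model.
params = np.zeros(3)
grads = [np.array([3.0, 4.0, 0.0]), np.array([0.1, 0.1, 0.1])]
new_params = dpsgd_step(params, grads)
```

The clipping bound and noise multiplier are themselves hyperparameters, alongside the learning rate and batch size, which is what makes the paper's hyperparameter-centric view of the privacy/utility trade-off natural.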