Training Private Models That Know What They Don't Know. (arXiv:2305.18393v1 [cs.LG])
cs.CR updates on arXiv.org
Training reliable deep learning models that avoid making overconfident but
incorrect predictions is a longstanding challenge. This challenge is further
exacerbated when learning has to be differentially private: the protection
provided to sensitive data comes at the price of injecting additional
randomness into the learning process. In this work, we conduct a thorough
empirical investigation of selective classifiers -- which can abstain when
they are unsure -- under a differential privacy constraint. We find that
several popular selective prediction approaches are …
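The abstract's core object, a selective classifier, is a model that may abstain rather than emit a low-confidence prediction. As a minimal illustration (not the paper's method), a common baseline thresholds the maximum softmax probability; the `threshold` parameter and the `-1` abstention code below are assumptions for this sketch:

```python
import numpy as np

def selective_predict(probs, threshold=0.8):
    """Softmax-threshold selective classifier (illustrative baseline).

    Predicts the argmax class when the top class probability meets
    `threshold`; otherwise abstains, encoded here as -1.
    """
    probs = np.asarray(probs)
    conf = probs.max(axis=-1)          # confidence = max class probability
    preds = probs.argmax(axis=-1)      # ordinary classifier decision
    return np.where(conf >= threshold, preds, -1)

# Example: confident rows yield a class label, an uncertain row abstains.
probs = np.array([[0.95, 0.05],
                  [0.55, 0.45],
                  [0.10, 0.90]])
print(selective_predict(probs, threshold=0.8))  # -> [ 0 -1  1]
```

Under differential privacy, the noise injected during training tends to distort these confidence scores, which is what makes the abstention rule harder to calibrate.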