Reliable Learning for Test-time Attacks and Distribution Shift. (arXiv:2304.03370v1 [cs.LG])
cs.CR updates on arXiv.org
Machine learning algorithms are often used in environments which are not
captured accurately even by the most carefully obtained training data, either
due to the possibility of 'adversarial' test-time attacks, or on account of
'natural' distribution shift. For test-time attacks, we introduce and analyze a
novel robust reliability guarantee, which requires a learner to output
predictions along with a reliability radius $\eta$, with the meaning that its
prediction is guaranteed to be correct as long as the adversary has not …
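The idea of pairing a prediction with a reliability radius can be illustrated with a toy 1-nearest-neighbor learner (this is an illustrative sketch, not the paper's actual algorithm or guarantee): the radius is half the gap between the distance to the nearest training point and the distance to the nearest training point with a different label, so any test-time perturbation smaller than that radius cannot change the 1-NN prediction.

```python
# Toy 1-D training set of (feature, label) pairs -- hypothetical data.
train = [(0.0, "A"), (1.0, "A"), (5.0, "B"), (6.0, "B")]

def predict_with_radius(x):
    """Return a 1-NN prediction and a reliability radius eta.

    If the test point is perturbed by less than eta, the nearest
    training point still carries the same label, so the 1-NN
    prediction is unchanged.
    """
    # Nearest training point determines the prediction.
    dists = sorted((abs(x - p), y) for p, y in train)
    d_near, label = dists[0]
    # Distance to the closest training point with a *different* label.
    d_opp = min(abs(x - p) for p, y in train if y != label)
    # A perturbation delta flips the prediction only if
    # d_near + delta >= d_opp - delta, i.e. delta >= (d_opp - d_near) / 2.
    eta = (d_opp - d_near) / 2.0
    return label, eta

label, eta = predict_with_radius(0.5)
print(label, eta)  # -> A 2.0
```

Here the point `0.5` sits between the two "A" examples, each at distance 0.5, while the nearest "B" is at distance 4.5, giving a radius of 2.0; larger radii mean the prediction tolerates larger adversarial perturbations.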