Beyond Labeling Oracles: What does it mean to steal ML models? (arXiv:2310.01959v1 [cs.LG])
cs.CR updates on arXiv.org
Model extraction attacks are designed to steal trained models with only query
access, as is often provided through APIs that ML-as-a-Service providers offer.
ML models are expensive to train, in part because data is hard to obtain, and a
primary incentive for model extraction is to acquire a model while incurring
less cost than training from scratch. Literature on model extraction commonly
claims or presumes that the attacker is able to save on both data acquisition
and labeling costs. We …
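The attack described above can be illustrated with a minimal sketch. This is not the paper's method, only the generic labeling-oracle setup it critiques: the victim model and `query_api` endpoint are simulated locally (both are assumptions for illustration); in a real attack the oracle would be a remote ML-as-a-Service API returning labels for attacker-chosen queries.

```python
# Hedged sketch of model extraction via query access only (illustrative, not
# the paper's technique). The victim is simulated locally for demonstration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Victim: an expensive-to-train model hidden behind a query-only API.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
victim = LogisticRegression(max_iter=1000).fit(X, y)

def query_api(batch):
    """Query access only: the attacker sees labels, never weights or data."""
    return victim.predict(batch)

# Attacker: synthesize query inputs (no access to the real training data),
# collect oracle labels, and fit a cheap surrogate model on them.
queries = rng.normal(size=(1000, 10))
labels = query_api(queries)
surrogate = DecisionTreeClassifier(random_state=0).fit(queries, labels)

# Fidelity: how often the stolen model agrees with the victim on fresh inputs.
test = rng.normal(size=(500, 10))
fidelity = np.mean(surrogate.predict(test) == query_api(test))
print(f"surrogate/victim agreement: {fidelity:.2f}")
```

The sketch makes the cost asymmetry concrete: the attacker pays only per-query labeling cost and spends nothing on data acquisition, which is exactly the presumption the paper examines.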