Beyond Labeling Oracles: What does it mean to steal ML models?
June 14, 2024, 4:19 a.m. | Avital Shafran, Ilia Shumailov, Murat A. Erdogdu, Nicolas Papernot
cs.CR updates on arXiv.org arxiv.org
Abstract: Model extraction attacks are designed to steal trained models with only query access, as is often provided through APIs that ML-as-a-Service providers offer. Machine Learning (ML) models are expensive to train, in part because data is hard to obtain, and a primary incentive for model extraction is to acquire a model while incurring less cost than training from scratch. Literature on model extraction commonly claims or presumes that the attacker is able to save on …
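The labeling-oracle setting the abstract describes can be illustrated with a toy sketch: the attacker never sees the victim's parameters or training data, only the labels the API returns, and uses those labels to fit a cheap surrogate. Everything below is a hypothetical illustration, not the paper's method; the victim is a stand-in linear classifier and the surrogate is a plain perceptron.

```python
import random

# Hypothetical stand-in for an ML-as-a-Service API: a "victim" model with
# secret parameters that the attacker can only query for hard labels.
def victim_predict(x):
    w, b = (2.0, -1.0), 0.5  # unknown to the attacker
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

def extract_surrogate(n_queries=2000, epochs=20, lr=0.1, seed=0):
    """Fit a surrogate perceptron purely from the victim's returned labels."""
    rng = random.Random(seed)
    # Attacker-chosen query points: no ground-truth data is needed,
    # which is the cost saving model extraction aims for.
    queries = [(rng.uniform(-3, 3), rng.uniform(-3, 3)) for _ in range(n_queries)]
    labels = [victim_predict(x) for x in queries]  # labeling-oracle access
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(queries, labels):
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = y - pred  # perceptron update on oracle labels
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def surrogate_predict(params, x):
    w, b = params
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
```

On held-out random inputs the surrogate typically agrees with the victim on the vast majority of points, despite the attacker having spent only query cost, not data-collection or training-from-scratch cost.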