July 21, 2022, 1:20 a.m. | Adam Dziedzic, Muhammad Ahmad Kaleem, Yu Shen Lu, Nicolas Papernot

cs.CR updates on arXiv.org arxiv.org

In model extraction attacks, adversaries can steal a machine learning model
exposed via a public API by repeatedly querying it and adjusting their own
model based on the predictions they obtain. To prevent model stealing, existing
defenses focus on detecting malicious queries or on truncating or distorting
outputs, and thus necessarily introduce a tradeoff between robustness and model
utility for legitimate users. Instead, we propose to impede model extraction by
requiring users to complete a proof-of-work before they can read the model's
predictions. This deters …
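The general flavor of such a gate can be illustrated with a hashcash-style proof-of-work placed in front of a prediction endpoint. The sketch below is a simplified assumption of how the idea could be wired up, not the paper's calibrated mechanism (which adjusts the required work per query); the names issue_challenge, solve, verify and the fixed DIFFICULTY_BITS are illustrative only.

import hashlib
import os

DIFFICULTY_BITS = 20  # hypothetical fixed difficulty; the paper calibrates the required work per query

def issue_challenge() -> bytes:
    # Server side: generate a fresh random challenge for the incoming query.
    return os.urandom(16)

def leading_zero_bits(digest: bytes) -> int:
    # Count the number of leading zero bits in a hash digest.
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        bits += 8 - byte.bit_length()
        break
    return bits

def solve(challenge: bytes, difficulty: int = DIFFICULTY_BITS) -> int:
    # Client side: brute-force a nonce so that SHA-256(challenge || nonce) has enough leading zero bits.
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if leading_zero_bits(digest) >= difficulty:
            return nonce
        nonce += 1

def verify(challenge: bytes, nonce: int, difficulty: int = DIFFICULTY_BITS) -> bool:
    # Server side: checking a proof takes a single hash, so legitimate traffic stays cheap to serve.
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return leading_zero_bits(digest) >= difficulty

# Only release the model's prediction once the proof checks out.
challenge = issue_challenge()
nonce = solve(challenge)
assert verify(challenge, nonce)
# prediction = model(query)  # served only after verification

The asymmetry is the point: the querier must grind through on the order of 2^DIFFICULTY_BITS hashes per query, while the server verifies each proof with a single hash, so the cost of large-scale extraction falls on whoever issues the queries.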

